
WO2024225007A1 - Hearable device, integrated circuit, and biological signal measurement system - Google Patents


Info

Publication number
WO2024225007A1
WO2024225007A1 (PCT/JP2024/014279)
Authority
WO
WIPO (PCT)
Prior art keywords
noise reduction
microphone
hearable device
user
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/014279
Other languages
French (fr)
Japanese (ja)
Inventor
真央 勝原
貴規 石川
侑乃丞 北上
陽多 小森谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Publication of WO2024225007A1


Classifications

    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/256 Wearable electrodes, e.g. having straps or bands
    • A61B 5/291 Bioelectric electrodes specially adapted for electroencephalography [EEG]
    • H04R 1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones

Definitions

  • the present disclosure relates to a hearable device, an integrated circuit, and a biosignal measurement system, and in particular to a hearable device, an integrated circuit, and a biosignal measurement system that enable the acquisition of cleaner biosignals.
  • Non-Patent Document 1 proposes a technology that uses microphone signals to remove noise from in-ear brain waves.
  • In the technology of Non-Patent Document 1, however, it is practically difficult to place a microphone directly under the electrode inside the earpiece and use the microphone signal as a reference signal for noise removal.
  • the hearable device of the present disclosure is a hearable device that includes a microphone, an inertial sensor, a biosensor that measures a user's biosignal, and a noise reduction unit that performs noise reduction on the biosignal using an output signal from at least one of the microphone and the inertial sensor as a reference signal.
  • the integrated circuit disclosed herein is an integrated circuit equipped with a noise reduction unit that performs noise reduction on the user's biosignal measured by a biosensor using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal.
  • the biosignal measurement system disclosed herein includes a hearable device, a noise reduction unit that performs noise reduction on a user's biosignal measured by a biosensor using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal, a feature extraction unit that extracts features from the biosignal after the noise reduction has been performed, and a user state estimation unit that estimates the state of the user based on the extracted features.
  • noise reduction is performed on the user's biosignal measured by the biosensor using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal.
  • FIG. 1 is a block diagram showing an example of the configuration of a biosignal measurement system according to the present disclosure.
  • FIG. 2 is a flowchart illustrating the flow of a service providing process.
  • FIG. 3 is a block diagram showing a configuration example of a TWS earphone.
  • FIG. 4 is a flowchart illustrating the operation of the TWS earphone.
  • FIGS. 5A and 5B are diagrams illustrating an example of the external configuration of a microphone.
  • FIG. 6 is a diagram showing an example of microphone arrangement.
  • FIG. 7 is a diagram showing an example of microphone arrangement.
  • EEG: electroencephalography; PPG: photoplethysmography; TWS: True Wireless Stereo
  • Non-Patent Document 1 proposes a technology that uses a microphone signal to remove noise from in-ear brain waves, but the microphone is built into a sponge-like earpiece that is equipped with electrodes, making it difficult to apply to a device designed for listening to music.
  • If the output signals from the acceleration sensor and microphone built into many earphones could be combined multimodally and used as reference signals, it would be possible to efficiently remove noise caused by various movements in daily life and obtain only clean biosignals. Furthermore, given limited resources such as space and power consumption, it is desirable to make maximum use of the information obtained from already-implemented functions.
  • Because the microphone built into earphones is designed to capture sounds in the audible range, it is relatively sensitive to high frequencies but has low sensitivity to low-frequency noise such as body movements, making it difficult to use as-is as a reference signal for removing noise from biosignals.
  • Biosignal measurement system and service provision process according to the present disclosure
  • the biosignal measurement system disclosed herein is able to make the most of information obtained by functions implemented for other purposes in hearable devices such as earphones and headphones, which have limited resources such as space and power consumption. This makes it possible to efficiently remove noise that is mixed into biosignals due to various actions in daily life, obtain only clean biosignals, and provide services using these to users.
  • FIG. 1 is a block diagram showing an example of the configuration of a biosignal measuring system according to the present disclosure.
  • the biosignal measurement system 10 shown in FIG. 1 is composed of a hearable device 100 and a computer 200.
  • the hearable device 100 is configured as an earphone or a headphone that is worn on or around the user's ear.
  • the hearable device 100 may be a canal-type earphone or an in-ear-type earphone.
  • the hearable device 100 may also be a TWS earphone.
  • computer 200 is configured as a smartphone, tablet terminal, PC (Personal Computer), etc. owned by the user.
  • Computer 200 may also be configured as a server on the cloud, etc.
  • the hearable device 100 can transmit various information to the computer 200 via wireless transmission over a wireless communication network such as BLE (Bluetooth Low Energy) (registered trademark) or BAN (Body Area Network).
  • the hearable device 100 is configured to include a biosensor 110, an inertial sensor 120, a microphone 130, and an IC (Integrated Circuit) chip 140. Although not shown, the hearable device 100 also includes an acoustic function that outputs music or sound to the user's ears. In addition to the inertial sensor 120 and microphone 130, the hearable device 100 may also include other sensors, such as an impedance sensor.
  • the biosensor 110 is provided inside a housing that constitutes the exterior of the hearable device 100, or in an earpiece or ear pad when the hearable device 100 is configured as a canal-type earphone.
  • the biosensor 110 measures the biosignal of the user wearing the hearable device 100 via electrodes that contact the skin of the user.
  • the biosensor 110 is configured as an EEG sensor or PPG sensor, and measures the user's EEG signal or PPG signal as the biosignal.
  • the inertial sensor 120 is provided inside the housing of the hearable device 100.
  • the inertial sensor 120 is configured as an acceleration sensor and a gyro sensor for tracking the user's behavior, and outputs six-axis measurement signals (three-axis acceleration and three-axis angular velocity) to the IC chip 140 as output signals.
  • the microphone 130 is provided inside the housing of the hearable device 100.
  • the microphone 130 may be provided in an earpiece or an ear pad.
  • the microphone 130 may be configured, for example, as a feedback microphone (FB microphone) for noise canceling.
  • the microphone 130 outputs a picked-up sound signal that picks up external sound from the hearable device 100 to the IC chip 140 as an output signal.
  • the low-frequency sensitivity of the microphone 130 is preferably -15 dB/Pa or higher in order to improve sensitivity to low-frequency noise.
  • the IC chip 140 is configured as an integrated circuit according to the present disclosure, and realizes the function of efficiently removing noise mixed into the biosignal and acquiring only the clean biosignal.
  • the IC chip 140 realizes the noise reduction unit 151 and the feature extraction unit 152 as functional blocks.
  • the noise reduction unit 151 performs noise reduction on the biosignal measured by the biosensor 110, using the output signal from at least one of the inertial sensor 120 and the microphone 130 as a reference signal for noise reduction.
  • the biosignal after noise reduction is supplied to the feature extraction unit 152.
  • output signals from the other sensors described above may be used as reference signals for noise reduction, if necessary.
  • the feature extraction unit 152 extracts various features from the biosignal after noise reduction has been performed by the noise reduction unit 151.
  • the extracted features are transmitted to the computer 200 via wireless transmission and are used to estimate the user's state.
  • Computer 200 is configured to include a CPU (Central Processing Unit) 210.
  • the CPU 210 implements various programs (algorithms) and executes them to realize various functions.
  • the CPU 210 realizes a user state estimation unit 221 and an application execution unit 222 as functional blocks.
  • the user state estimation unit 221 estimates the state of the user wearing the hearable device 100 based on various feature amounts transmitted from the hearable device 100.
  • the user state estimation by the user state estimation unit 221 includes estimation of the user's emotions and stress.
  • the application execution unit 222 provides various services based on the estimation results by the user state estimation unit 221.
  • In step S11, the noise reduction unit 151 of the hearable device 100 acquires the biosignal measured by the biosensor 110.
  • In step S12, the noise reduction unit 151 performs noise reduction on the biosignal using the output signals of the inertial sensor 120 and the microphone 130 as reference signals. Specifically, by using the output signal of the inertial sensor 120, which has high sensitivity to low-frequency noise, as a reference signal, low-frequency noise mixed into the biosignal can be removed. Likewise, by using the output signal of the microphone 130, which has high sensitivity to high-frequency noise, as a reference signal, high-frequency noise mixed into the biosignal can be removed.
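The reference-signal noise reduction in step S12 can be sketched as an adaptive filter. The function below is an illustrative stand-in, not the disclosed implementation: it uses a normalized LMS (NLMS) update, and the function name, filter length, and step size are assumptions. The filter learns the path from the reference (an inertial-sensor or microphone output) to the noise mixed into the biosignal, and the error signal is the cleaned biosignal:

```python
import numpy as np

def nlms_noise_reduction(biosignal, reference, filter_len=8, mu=0.1, eps=1e-8):
    """Illustrative NLMS adaptive filter: estimate the noise component of the
    biosignal from the reference signal and subtract it; the residual (error)
    is the cleaned biosignal."""
    n = len(biosignal)
    w = np.zeros(filter_len)                 # adaptive filter taps
    cleaned = np.array(biosignal, dtype=float)
    for i in range(filter_len - 1, n):
        x = reference[i - filter_len + 1:i + 1][::-1]  # newest sample first
        noise_est = w @ x                              # estimated noise sample
        e = biosignal[i] - noise_est                   # error = cleaned sample
        w += mu * e * x / (x @ x + eps)                # normalized LMS update
        cleaned[i] = e
    return cleaned
```

With a reference that is statistically independent of the underlying biosignal, the taps converge toward the (causal) noise path, and the residual noise power after convergence drops well below the raw mixed-in noise power.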
  • In step S13, the feature extraction unit 152 of the hearable device 100 extracts various features from the noise-reduced biosignal.
  • the various extracted features are transmitted to the computer 200 by wireless transmission.
  • In step S14, the user state estimation unit 221 of the computer 200 estimates the user's state based on the various features transmitted from the hearable device 100.
  • In step S15, the application execution unit 222 of the computer 200 provides various services based on the estimated user state.
  • By using the output signals from the inertial sensor 120 and the microphone 130 as reference signals, it is possible to efficiently remove noise that is mixed into the biosignals due to various movements in daily life. As a result, it is possible to obtain cleaner biosignals, which in turn makes it possible to provide high-quality services.
  • Configuration and operation of the TWS earphone
  • Next, a TWS earphone, which is one embodiment of the hearable device 100 of the present disclosure, will be described.
  • FIG. 3 is a block diagram showing an example of the configuration of a TWS earphone.
  • the TWS earphone 300 shown in FIG. 3 is configured to include an EEG sensor 310, an IMU 320, a microphone 330, a body movement context estimation unit 351, a noise reduction unit 352, a signal quality estimation unit 353, and a feature extraction unit 354.
  • the EEG sensor 310 corresponds to the biosensor 110 in FIG. 1 and measures EEG signals in the user's ear, for example, via an earpiece-type electrode.
  • the IMU (Inertial Measurement Unit) 320 corresponds to the inertial sensor 120 in FIG. 1, and outputs an acceleration signal that measures the inertial movement of the user as an output signal to the body movement context estimation unit 351 and the noise reduction unit 352.
  • the IMU 320 may be a sensor implemented for tracking the user's behavior, or may be a sensor with desired characteristics that is provided at a desired position for noise reduction separately from the above.
  • the microphone 330 corresponds to the microphone 130 in FIG. 1, and outputs a picked-up sound signal that picks up external sounds from the TWS earphone 300 as an output signal to the body movement context estimation unit 351 and the noise reduction unit 352.
  • the microphone 330 may be an FB microphone implemented for noise cancellation, or may be a microphone with desired characteristics that is provided separately at a desired position for noise reduction. It is desirable for the microphone 330 to have excellent sensitivity in the low range (especially the frequency band (approximately 1 to 40 Hz) related to the biosignal (EEG signal) of interest).
  • the body movement context estimation unit 351, the noise reduction unit 352, the signal quality estimation unit 353, and the feature extraction unit 354 can be realized by the IC chip 140 in FIG. 1.
  • the body movement context estimation unit 351 uses the trained model M1 to estimate, from the EEG signal from the EEG sensor 310 and the output signals from the IMU 320 and microphone 330, a body movement context that classifies the type of the user's body movement.
  • Body movement contexts include, for example, walking, running, speaking, chewing, and resting.
  • Other functions of the TWS earphone 300 or external information obtained from the smartphone (computer 200) that is the communication partner may be used to estimate the body movement context.
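As an illustration of the kind of classification the body movement context estimation unit 351 performs, the toy rule-based classifier below stands in for the trained model M1. The window features, thresholds, and context labels used here are invented for the example and are not taken from the disclosure:

```python
import numpy as np

def classify_body_movement(imu_window, mic_window):
    """Toy stand-in for trained model M1: classify body movement from short,
    time-aligned IMU and microphone windows using simple energy heuristics.
    All thresholds are illustrative assumptions."""
    imu_rms = float(np.sqrt(np.mean(np.square(imu_window))))
    mic_rms = float(np.sqrt(np.mean(np.square(mic_window))))
    if imu_rms > 2.0:
        return "running"     # strong trunk/head motion
    if imu_rms > 0.5:
        return "walking"     # moderate periodic motion
    if mic_rms > 0.5:
        return "speaking"    # housing vibration / voice energy
    return "resting"
```

A real implementation would replace these heuristics with the trained model M1 operating on the EEG, IMU, and microphone signals jointly.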
  • the noise reduction unit 352 uses the trained model M2 to perform noise reduction according to the body movement context estimated by the body movement context estimation unit 351.
  • the EEG signal after noise reduction is supplied to the signal quality estimation unit 353.
  • the signal quality estimation unit 353 estimates the signal quality of the EEG signal after noise reduction has been performed, assigns a label representing the estimated signal quality to the EEG signal, and supplies it to the feature extraction unit 354.
  • the feature extraction unit 354 extracts features from the EEG signal supplied from the signal quality estimation unit 353, and assigns a reliability to the extracted features based on the estimated signal quality (label).
  • the features to which the reliability has been assigned are transmitted wirelessly to the computer 200 ( Figure 1) and used to estimate the user's state.
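A minimal sketch of the quality labeling and reliability assignment described above. The label names, amplitude thresholds, reliability mapping, and band-power feature are assumptions chosen for illustration, not values from the disclosure:

```python
import numpy as np

RELIABILITY = {"good": 1.0, "fair": 0.6, "poor": 0.2}  # assumed mapping

def label_signal_quality(eeg, clip_level=1.0):
    """Label a window by peak amplitude relative to an assumed artifact level."""
    peak = float(np.max(np.abs(eeg)))
    if peak > clip_level:
        return "poor"
    if peak > 0.5 * clip_level:
        return "fair"
    return "good"

def extract_features(eeg, fs=256.0):
    """Extract an alpha-band power feature and attach a reliability weight
    derived from the estimated signal quality label."""
    label = label_signal_quality(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    alpha = float(spectrum[(freqs >= 8.0) & (freqs < 13.0)].sum())
    return {"alpha_power": alpha, "reliability": RELIABILITY[label]}
```

Downstream, the user state estimator can weight each feature by its reliability so that windows with residual artifacts contribute less.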
  • The operation of the TWS earphone 300 will now be described with reference to the flowchart of FIG. 4.
  • In step S31, the body movement context estimation unit 351 and the noise reduction unit 352 acquire the EEG signal measured by the EEG sensor 310 and the output signals of the IMU 320 and the microphone 330.
  • the output signals of the IMU 320 and the microphone 330 are assumed to be time-synchronized with the EEG signal.
  • the sampling rate of the output signals of the IMU 320 and the microphone 330 is set to be equal to or higher than that of the EEG signal (preferably 64 Hz or higher).
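The time alignment of the reference streams onto the EEG time base can be sketched as a resampling step. The helper below is an illustrative assumption using linear interpolation; a production implementation might instead use polyphase filtering:

```python
import numpy as np

def align_reference(ref, ref_fs, eeg_fs, n_eeg_samples):
    """Resample a reference signal (IMU or microphone output) onto the EEG
    sample clock by linear interpolation, so both streams share one time base."""
    t_eeg = np.arange(n_eeg_samples) / float(eeg_fs)
    t_ref = np.arange(len(ref)) / float(ref_fs)
    return np.interp(t_eeg, t_ref, ref)
```

Because the reference streams are sampled at or above the EEG rate, this downsampling interpolation does not discard information in the EEG band of interest.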
  • In step S32, the body movement context estimation unit 351 estimates the user's body movement context based on the EEG signal and the output signals. For example, the user's body movement is classified into body movement contexts such as walking, running, speaking, chewing, and resting based on myoelectric noise mixed into the EEG signal, trunk movement recognized from inertial motion measured by the IMU 320, and housing vibration and voice information input to the microphone 330. The user's body movement may also be classified into other body movement contexts, such as sleeping, as necessary, or classified based on the user's situation obtained as the above-mentioned external information (for example, operating the TWS earphone 300 or speaking).
  • In step S33, the noise reduction unit 352 performs noise reduction by selecting an algorithm according to the body movement context estimated by the body movement context estimation unit 351. At this time, the noise reduction unit 352 changes the type of output signal used as a reference signal, or changes the weighting of the output signals used as reference signals, according to the estimated body movement context.
  • For example, for body movement contexts such as walking or running, noise reduction is performed using an adaptive filter that uses the output signal from the IMU 320; for contexts such as speaking or chewing, noise reduction is performed using an adaptive filter that uses the output signal from the microphone 330, or by using multivariate EMD (Empirical Mode Decomposition) analysis.
  • the type of algorithm selected according to the body movement context is not limited to the above, and a DNN (Deep Neural Network) model according to the body movement context may be selected and applied. Noise reduction may also be performed by a single DNN model that also includes estimation of the body movement context. These noise reduction algorithms may be updated periodically, such as when updating firmware, or may be updated automatically by the user adjusting parameters, etc.
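One way to realize the context-dependent switching of reference signals and algorithms described above is a simple dispatch table. The policy below is purely illustrative: the per-context weights and the exact algorithm assignments are assumptions, not values specified in the disclosure:

```python
import numpy as np

# Illustrative policy: which reference to favor and which algorithm family to
# run for each body movement context (assumed values, not from the text).
CONTEXT_POLICY = {
    "walking":  {"weights": {"imu": 1.0, "mic": 0.0}, "algorithm": "adaptive_filter"},
    "running":  {"weights": {"imu": 1.0, "mic": 0.0}, "algorithm": "adaptive_filter"},
    "speaking": {"weights": {"imu": 0.2, "mic": 0.8}, "algorithm": "multivariate_emd"},
    "chewing":  {"weights": {"imu": 0.3, "mic": 0.7}, "algorithm": "multivariate_emd"},
    "resting":  {"weights": {"imu": 0.0, "mic": 0.0}, "algorithm": "none"},
}

def select_reference(context, imu_signal, mic_signal):
    """Blend the IMU and microphone outputs into one reference signal,
    weighted according to the estimated body movement context, and report
    which noise reduction algorithm family to run."""
    policy = CONTEXT_POLICY.get(context, CONTEXT_POLICY["resting"])
    w = policy["weights"]
    ref = w["imu"] * np.asarray(imu_signal) + w["mic"] * np.asarray(mic_signal)
    return ref, policy["algorithm"]
```

Keeping the policy in a table like this also makes it straightforward to replace entries during a firmware update, as the text suggests.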
  • the noise reduction unit 352 may switch the processor (IC) that executes noise reduction depending on the selected algorithm.
  • a control IC may be used for the numerical analysis algorithm, and an AI (Artificial Intelligence) IC may be used for the DNN model. More specifically, an AI IC may be used for the noise reduction algorithm, and a control IC may be used for the feature extraction in the subsequent stage.
  • an AI IC may be used for an algorithm based on a DNN model, and a control IC may be used for an algorithm based on an adaptive filter or multivariate EMD analysis. In this way, adaptive selection of an IC depending on the type of calculation makes it possible to speed up calculations and reduce power consumption.
  • the noise reduction unit 352 may also perform noise reduction using the impedance measurement value of the electrodes of the biosensor (EEG sensor 310) as a reference signal in addition to the output signals of the IMU 320 and microphone 330.
  • In step S34, the signal quality estimation unit 353 estimates the signal quality of the EEG signal after noise reduction, and assigns a label representing that signal quality to the EEG signal.
  • In step S35, the feature extraction unit 354 extracts features from the labeled EEG signal.
  • the extracted features are assigned a reliability based on the label assigned to the EEG signal, and are transmitted to the computer 200 via wireless transmission.
  • the user state estimation unit 221 estimates the user's emotions using an emotion estimation algorithm, and the user's level of mental fatigue using a stress estimation algorithm, based on the features and their reliability transmitted from the TWS earphone 300.
  • the application execution unit 222 processes the estimation results of the user state estimation unit 221 and presents them to the user in the form of images, sounds, etc. via a UI (User Interface).
  • the above process performs noise reduction according to the user's body movement context, making it possible to efficiently remove noise that is mixed into biosignals due to various movements in daily life, resulting in the acquisition of cleaner biosignals.
  • In the configuration described above, wireless transmission is performed after feature extraction.
  • As the processing power and RAM (Random Access Memory) capacity of the IC chip on the edge terminal (TWS earphone 300) side improve, the processing after feature extraction, up to the estimation of the user's state, may also be performed on the edge terminal side.
  • the EEG signal and output signal (reference signal) may all be transmitted from the edge terminal to the CPU (computer 200) side, and the processing after estimation of body movement context and noise reduction may be performed on the CPU side.
  • FIG. 5 shows an example of the external configuration of microphone 330.
  • the microphone 330 is configured, for example, by providing an IC chip or the like in a case formed in the shape of a flat rectangular parallelepiped, and a sound pickup hole 330h is formed in part of the case.
  • the microphone 330 may be a MEMS (Micro Electro Mechanical Systems) microphone or an electret condenser microphone.
  • When the microphone 330 is configured as an FB microphone for noise canceling, noise cancellation is performed based on the external sound picked up through the sound pickup hole 330h.
  • FIG. 6 shows an example of the placement of the microphone 330 when the TWS earphone 300 is a canal-type earphone.
  • the earphone 300A shown in Figures 6A and 6B is composed of a housing 411 and an earpiece 412.
  • the housing 411 contains a driver unit 421 that converts audio signals into sound.
  • the sound emitted from the driver unit 421 is delivered to the user's ear through a sound guide tube 422.
  • the earpiece 412 is removably attached to the tip of the sound guide tube 422, and has the function of transmitting the sound emitted from the sound guide tube 422 to the ear without distortion while sealing the user's ear and blocking out surrounding sounds.
  • the earpiece 412 also functions as an electrode for the EEG sensor 310.
  • the microphone 330 may be arranged inside the housing 411 and configured as an FB microphone for noise canceling that picks up sound inside the sound guide tube 422.
  • This arrangement takes into account the acoustic characteristics of noise canceling. In this case, costs and power consumption can be reduced compared to a configuration in which a microphone is provided separately from the FB microphone for noise canceling.
  • the microphone 330 may be placed inside the sound guide tube 422 and immediately adjacent to the earpiece 412. This placement is specialized for removing noise contained in the EEG signal from the EEG sensor 310.
  • FIG. 7 shows an example of the placement of the microphone 330 when the TWS earphone 300 is an in-ear type earphone.
  • the earphone 300B shown in FIG. 7 is composed of a housing 431 and an electrode 432.
  • the electrode 432 is provided on the housing 431 at a portion that contacts the concha at the entrance of the user's ear, and functions as an electrode of the EEG sensor 310.
  • the electrode 432 may be made of a conductive resin or a metal.
  • In FIG. 7, the microphone 330 is placed inside the housing 431, immediately adjacent to the electrode 432. This placement is specialized for removing noise contained in the EEG signal from the EEG sensor 310.
  • the above-described embodiment is configured to obtain EEG signals using TWS earphones and provide services based on the EEG signals.
  • the hardware configuration to which the technology according to the present disclosure can be applied is not limited to earphones, and may be headband-type headphones or a head mounted display (HMD).
  • the biosignal handled by the technology according to the present disclosure is not limited to EEG signals, and may be a biosignal measured by a photoplethysmography sensor, an electromyography sensor, an electrooculography sensor, or the like.
  • the present disclosure can have the following configurations.
  • (1) A hearable device comprising: a microphone; an inertial sensor; a biosensor that measures a biosignal of a user; and a noise reduction unit that performs noise reduction on the biosignal using an output signal from at least one of the microphone and the inertial sensor as a reference signal.
  • The hearable device according to (1), wherein the microphone is provided inside a housing.
  • (4) The hearable device according to any one of (1) to (3), further comprising a feature extraction unit that extracts a feature from the biosignal after the noise reduction is performed, wherein the feature is used to estimate a state of the user.
  • (5) The hearable device according to (4), wherein the feature is transmitted to a computer that provides a service based on the estimated state of the user.
  • (6) The hearable device according to (4) or (5), wherein the estimation of the user's state includes estimation of the user's emotion or estimation of the user's stress.
  • (7) The hearable device according to any one of (4) to (6), further comprising a signal quality estimation unit that estimates a signal quality of the biosignal after the noise reduction is performed, wherein the feature extraction unit assigns a reliability based on the estimated signal quality to the feature extracted from the biosignal.
  • (8) The hearable device according to any one of (1) to (7), further comprising a body movement context estimation unit configured to estimate a body movement context of the user based on the biosignal and the output signal, wherein the noise reduction unit performs the noise reduction according to the estimated body movement context.
  • (9) The hearable device according to (8), wherein the noise reduction unit selects an algorithm for performing the noise reduction in accordance with the estimated body movement context.
  • (10) The hearable device according to (9), wherein the noise reduction unit switches a processor that executes the noise reduction depending on the selected algorithm.
  • (11) The hearable device, wherein the noise reduction unit changes a type or weighting of the output signal to be used as the reference signal in accordance with the estimated body movement context.
  • (12) The hearable device according to any one of (1) to (11), wherein the microphone is provided immediately adjacent to an electrode of the biosensor.
  • (13) The hearable device according to any one of (1) to (12), wherein the microphone is a feedback microphone for noise canceling.
  • (14) The hearable device according to any one of (1) to (13), wherein the noise reduction unit performs the noise reduction using an impedance measurement value of an electrode of the biosensor as the reference signal in addition to the output signal.
  • (15) The hearable device according to any one of (1) to (14), wherein the biosensor includes at least one of an EEG (electroencephalography) sensor and a PPG (photoplethysmography) sensor.
  • (16) The hearable device according to any one of (1) to (15), wherein the inertial sensor includes at least one of an acceleration sensor and a gyro sensor.
  • (17) The hearable device according to any one of (1) to (16), wherein the microphone is a MEMS (Micro Electro Mechanical Systems) microphone or an electret condenser microphone.
  • (18) The hearable device according to any one of (1) to (17), which can be configured as earphones or headphones.
  • (19) An integrated circuit comprising a noise reduction unit that performs noise reduction on a user's biosignal measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in a hearable device as a reference signal.
  • (20) A biosignal measurement system comprising: a hearable device; a noise reduction unit that performs noise reduction on a biosignal of a user measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal; a feature extraction unit that extracts a feature from the biosignal after the noise reduction is performed; and a user state estimation unit that estimates a state of the user based on the extracted feature.
  • 10 Biosignal measurement system, 100 Hearable device, 110 Biosensor, 120 Inertial sensor, 130 Microphone, 140 IC chip, 151 Noise reduction unit, 152 Feature extraction unit, 200 Computer, 210 CPU, 221 User state estimation unit, 222 Application execution unit, 300 TWS earphone, 310 EEG sensor, 320 IMU, 330 Microphone, 351 Body movement context estimation unit, 352 Noise reduction unit, 353 Signal quality estimation unit, 354 Feature extraction unit


Abstract

The present disclosure relates to a hearable device, an integrated circuit, and a biological signal measurement system that make it possible to acquire a cleaner biological signal. A noise reduction unit executes noise reduction of a biological signal of a user measured by means of a biological sensor, using an output signal from at least one of a microphone and an inertial sensor as a reference signal. The present disclosure can be applied to a hearable device, such as a TWS earphone.

Description

Hearable Device, Integrated Circuit, and Biosignal Measurement System

 The present disclosure relates to a hearable device, an integrated circuit, and a biosignal measurement system, and in particular to a hearable device, an integrated circuit, and a biosignal measurement system that enable the acquisition of cleaner biosignals.

 In recent years, many services have been proposed that use biosignal measurement around the ear, enabled by EEG sensors, PPG sensors, and the like built into hearable devices such as earphones and headphones.

 Non-Patent Document 1 proposes a technique that removes noise from in-ear EEG using a microphone signal.

V. Goverdovsky, W. V. Rosenberg, T. Nakamura, D. Looney, D. J. Sharp, C. Papavassiliou, M. J. Morrell, and D. P. Mandic, "Hearables: Multimodal physiological in-ear sensing," Scientific Reports, vol. 7, Article no. 6948, 2017

 However, under the usage conditions actually expected for hearable devices, a large amount of noise may be introduced by the user's body movements and the external environment. Moreover, as exemplified by the technique of Non-Patent Document 1, it is impractical to place a microphone directly under the electrode inside the earpiece and use that microphone's signal as a reference signal for noise removal.

 The present disclosure has been made in view of these circumstances, and makes it possible to acquire cleaner biosignals.

 The hearable device of the present disclosure includes a microphone, an inertial sensor, a biosensor that measures a user's biosignal, and a noise reduction unit that performs noise reduction on the biosignal using an output signal from at least one of the microphone and the inertial sensor as a reference signal.

 The integrated circuit of the present disclosure includes a noise reduction unit that performs noise reduction on a user's biosignal measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in a hearable device as a reference signal.

 The biosignal measurement system of the present disclosure includes a hearable device; a noise reduction unit that performs noise reduction on a user's biosignal measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal; a feature extraction unit that extracts features from the biosignal after the noise reduction has been performed; and a user state estimation unit that estimates the state of the user based on the extracted features.

 In the present disclosure, noise reduction is performed on a user's biosignal measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in a hearable device as a reference signal.

Fig. 1 is a block diagram showing a configuration example of the biosignal measurement system of the present disclosure.
Fig. 2 is a flowchart describing the flow of the service providing process.
Fig. 3 is a block diagram showing a configuration example of a TWS earphone.
Fig. 4 is a flowchart describing the operation of the TWS earphone.
Fig. 5 is a diagram showing an example of the external configuration of a microphone.
Fig. 6 is a diagram showing an example of microphone arrangement.
Fig. 7 is a diagram showing an example of microphone arrangement.

 An embodiment for implementing the present disclosure (hereinafter referred to as the embodiment) is described below, in the following order.

 1. Prior art and its problems
 2. Biosignal measurement system of the present disclosure and service providing process
 3. Configuration and operation of TWS earphones
 4. Appearance of the microphone and examples of its arrangement
 5. Others

<1. Prior Art and Its Problems>
 In recent years, many services have been proposed that use biosignal measurement around the ear, enabled by EEG (electroencephalography) sensors, PPG (photoplethysmography) sensors, and the like built into hearable devices such as earphones and headphones.

 However, under the usage conditions actually expected for hearable devices, a large amount of noise may be introduced by the user's body movements and the external environment. In other words, there remain challenges in continuously acquiring clean data in daily life and providing services based on that data.

 Furthermore, devices that must be small and lightweight, such as TWS (True Wireless Stereo) earphones, require effective use of resources within particularly limited space and power budgets. If a comfortable hearable device could measure biosignals with low power consumption and little noise, users could receive high-quality services more easily.

 Non-Patent Document 1 proposes a technique that removes noise from in-ear EEG using a microphone signal, but that microphone is built into a sponge-like earpiece on which the electrodes are provided, making the approach difficult to apply to devices intended for music listening.

 If the output signals of the acceleration sensor and the microphone built into many earphones could be combined multimodally and used as reference signals, the noise produced by various actions in daily life could be removed efficiently, leaving only a clean biosignal. With resources such as space and power consumption limited, it is also desirable to make maximum use of the information obtained from functions that are already implemented. However, because the microphone built into an earphone is intended to capture audible-range sound, it is relatively sensitive to high frequencies but insensitive to low-frequency noise such as body movements, and therefore was difficult to use as-is as a reference signal for removing noise from biosignals.

<2. Biosignal Measurement System of the Present Disclosure and Service Providing Process>
 The biosignal measurement system of the present disclosure makes maximum use of the information obtained by functions implemented for other purposes in hearable devices such as earphones and headphones, where resources such as space and power consumption are limited. This makes it possible to efficiently remove the noise introduced into biosignals by various actions in daily life, acquire only clean biosignals, and provide services to the user based on them.

(Configuration of the biosignal measurement system)
 Fig. 1 is a block diagram showing a configuration example of the biosignal measurement system of the present disclosure.

 The biosignal measurement system 10 shown in Fig. 1 is composed of a hearable device 100 and a computer 200.

 The hearable device 100 is configured as earphones, headphones, or the like worn on or around the user's ear. When configured as earphones, the hearable device 100 may be canal-type or inner-ear-type earphones, and may be TWS earphones.

 The computer 200 is configured as a smartphone, tablet terminal, PC (Personal Computer), or the like owned by the user. The computer 200 may also be configured as a server on the cloud.

 The hearable device 100 can transmit various kinds of information to the computer 200 by wireless transmission over a wireless communication network such as BLE (Bluetooth Low Energy) (registered trademark) or BAN (Body Area Network).

 The hearable device 100 includes a biosensor 110, an inertial sensor 120, a microphone 130, and an IC (Integrated Circuit) chip 140. Although not shown, the hearable device 100 also includes an acoustic function that outputs music and sound to the user's ears. In addition to the inertial sensor 120 and the microphone 130, the hearable device 100 may include other sensors such as an impedance sensor.

 The biosensor 110 is provided inside the housing that forms the exterior of the hearable device 100, or, when the hearable device 100 is configured as canal-type earphones, in an earpiece or ear pad. The biosensor 110 measures the biosignal of the user wearing the hearable device 100 via electrodes that contact the user's skin. For example, the biosensor 110 is configured as an EEG sensor or a PPG sensor and measures the user's EEG signal or PPG signal as the biosignal.

 The inertial sensor 120 is provided inside the housing of the hearable device 100. The inertial sensor 120 is configured as an acceleration sensor and a gyro sensor for tracking the user's behavior, and outputs measurement signals of six-axis acceleration and angular velocity to the IC chip 140 as output signals.

 The microphone 130 is provided inside the housing of the hearable device 100. When the hearable device 100 is configured as canal-type earphones, the microphone 130 may be provided in an earpiece or ear pad. The microphone 130 may be configured, for example, as a feedback microphone (FB microphone) for noise canceling. The microphone 130 outputs a signal obtained by picking up sound external to the hearable device 100 to the IC chip 140 as an output signal. To improve sensitivity to low-frequency noise, the low-frequency sensitivity of the microphone 130 is set to at least -15 dB/Pa.

 The IC chip 140 is configured as an integrated circuit according to the present disclosure and realizes the function of efficiently removing the noise mixed into the biosignal and acquiring only a clean biosignal. The IC chip 140 realizes a noise reduction unit 151 and a feature extraction unit 152 as functional blocks.

 The noise reduction unit 151 performs noise reduction on the biosignal measured by the biosensor 110, using the output signal from at least one of the inertial sensor 120 and the microphone 130 as a reference signal for noise removal. The biosignal after noise reduction is supplied to the feature extraction unit 152. In the noise reduction, output signals from the other sensors described above may also be used as reference signals as needed.

 The feature extraction unit 152 extracts various features from the biosignal after noise reduction has been performed by the noise reduction unit 151. The extracted features are transmitted to the computer 200 by wireless transmission and used to estimate the user's state.

 The computer 200 includes a CPU (Central Processing Unit) 210.

 The CPU 210 implements various programs (algorithms) and realizes various functions by executing them.

 The CPU 210 realizes a user state estimation unit 221 and an application execution unit 222 as functional blocks.

 The user state estimation unit 221 estimates the state of the user wearing the hearable device 100 based on the various features transmitted from the hearable device 100. The user state estimation by the user state estimation unit 221 includes estimation of the user's emotions and the user's stress.

 The application execution unit 222 provides various services based on the estimation results of the user state estimation unit 221.

(Flow of the service providing process)
 The flow of the service providing process performed by the biosignal measurement system 10 is described with reference to the flowchart of Fig. 2.

 In step S11, the noise reduction unit 151 of the hearable device 100 acquires the biosignal measured by the biosensor 110.

 In step S12, the noise reduction unit 151 performs noise reduction on the biosignal using the output signals of the inertial sensor 120 and the microphone 130 as reference signals. Specifically, by using the output signal of the inertial sensor 120, which is highly sensitive to low-frequency noise, as a reference signal, low-frequency noise mixed into the biosignal can be removed. Likewise, by using the output signal of the microphone 130, which is highly sensitive to high-frequency noise, as a reference signal, high-frequency noise mixed into the biosignal can be removed.

 In step S13, the feature extraction unit 152 of the hearable device 100 extracts various features from the biosignal after noise reduction (from which the noise has been removed). The extracted features are transmitted to the computer 200 by wireless transmission.

 In step S14, the user state estimation unit 221 of the computer 200 estimates the user's state based on the various features transmitted from the hearable device 100.

 In step S15, the application execution unit 222 of the computer 200 provides various services based on the estimated user state.

 According to the above processing, using the output signals from the inertial sensor 120 and the microphone 130 as reference signals makes it possible to efficiently remove the noise mixed into biosignals by various actions in daily life. As a result, cleaner biosignals can be acquired, and in turn, higher-quality services can be provided.

<3. Configuration and Operation of TWS Earphones>
 Here, the configuration and operation of a TWS earphone, which is one embodiment of the hearable device 100 of the present disclosure, are described.

(Configuration of the TWS earphone)
 Fig. 3 is a block diagram showing a configuration example of a TWS earphone.

 The TWS earphone 300 shown in Fig. 3 includes an EEG sensor 310, an IMU 320, a microphone 330, a body movement context estimation unit 351, a noise reduction unit 352, a signal quality estimation unit 353, and a feature extraction unit 354.

 The EEG sensor 310 corresponds to the biosensor 110 in Fig. 1 and measures an EEG signal inside the user's ear, for example via earpiece-type electrodes.

 The IMU (Inertial Measurement Unit) 320 corresponds to the inertial sensor 120 in Fig. 1 and outputs an acceleration signal measuring the user's inertial motion to the body movement context estimation unit 351 and the noise reduction unit 352 as an output signal. The IMU 320 may be a sensor already implemented for tracking the user's behavior, or a separate sensor with desired characteristics provided at a desired position for noise reduction.

 The microphone 330 corresponds to the microphone 130 in Fig. 1 and outputs a signal obtained by picking up sound external to the TWS earphone 300 to the body movement context estimation unit 351 and the noise reduction unit 352 as an output signal. The microphone 330 may be an FB microphone already implemented for noise canceling, or a separate microphone with desired characteristics provided at a desired position for noise reduction. The microphone 330 preferably has excellent low-frequency sensitivity, particularly in the frequency band relevant to the biosignal of interest (about 1 to 40 Hz for an EEG signal).

 The body movement context estimation unit 351, the noise reduction unit 352, the signal quality estimation unit 353, and the feature extraction unit 354 can be realized by the IC chip 140 in Fig. 1.

 The body movement context estimation unit 351 uses a trained model M1 to estimate the body movement context, which classifies the type of the user's body movement, based on the EEG signal from the EEG sensor 310 and the output signals from the IMU 320 and the microphone 330. Body movement contexts include, for example, walking, running, speaking, chewing, and resting. External information obtained from other functions of the TWS earphone 300 or from the communication counterpart such as a smartphone (computer 200) may also be used for estimating the body movement context.

 The noise reduction unit 352 uses a trained model M2 to perform noise reduction according to the body movement context estimated by the body movement context estimation unit 351. The EEG signal after noise reduction is supplied to the signal quality estimation unit 353.

 The signal quality estimation unit 353 estimates the signal quality of the EEG signal after noise reduction, attaches a label representing the estimated signal quality to the EEG signal, and supplies it to the feature extraction unit 354.

 The feature extraction unit 354 extracts features from the EEG signal supplied from the signal quality estimation unit 353 and assigns to the extracted features a confidence based on the estimated signal quality (label). The features with their confidence are transmitted to the computer 200 (Fig. 1) by wireless transmission and used to estimate the user's state.

(Operation of the TWS earphone)
 The operation of the TWS earphone 300 is described with reference to the flowchart of Fig. 4.

 In step S31, the body movement context estimation unit 351 and the noise reduction unit 352 acquire the EEG signal measured by the EEG sensor 310 and the output signals of the IMU 320 and the microphone 330. The output signals of the IMU 320 and the microphone 330 are time-synchronized with the EEG signal. The sampling rate of the output signals of the IMU 320 and the microphone 330 is equal to or higher than that of the EEG signal (preferably 64 Hz or higher).

 In step S32, the body movement context estimation unit 351 estimates the user's body movement context based on the EEG signal and the output signals. For example, based on myoelectric noise mixed into the EEG signal, trunk movement recognized from the inertial motion measured by the IMU 320, and housing vibration and voice information picked up by the microphone 330, the user's body movement is classified into contexts such as walking, running, speaking, chewing, and resting. The user's body movement may also be classified into other contexts such as sleeping as needed, or classified based on the user's situation obtained as the external information described above (for example, operating the TWS earphone 300, or speaking).

 In step S33, the noise reduction unit 352 performs noise reduction by selecting a noise reduction algorithm according to the body movement context estimated by the body movement context estimation unit 351. In doing so, the noise reduction unit 352 changes the type of output signal used as the reference signal, or changes the weighting of the output signals used as reference signals, according to the estimated context.

 For example, for large, slow body movements such as walking and running, noise reduction is performed by an adaptive filter using the output signal of the IMU 320. For body movements dominated by relatively high-frequency noise, such as speaking and chewing, noise reduction is performed by an adaptive filter using the output signal of the microphone 330, by multivariate EMD (Empirical Mode Decomposition) analysis, or the like.

 The type of algorithm selected according to the body movement context is not limited to the above; a DNN (Deep Neural Network) model corresponding to the context may be selected and applied. Noise reduction may also be performed by a single DNN model that includes the estimation of the body movement context. These noise removal algorithms may be updated periodically, for example with firmware updates, or updated automatically through parameter adjustment by the user.
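 The context-dependent selection of step S33 amounts to a mapping from the estimated body movement context to an algorithm and a set of reference-signal weights. The table below is a purely illustrative sketch: the context labels, weight values, and algorithm names are assumptions for exposition, not values defined in the present disclosure.

```python
# Hypothetical policy table: which noise reduction algorithm to run and how
# strongly to weight each reference signal for a given body movement context.
CONTEXT_POLICY = {
    "walking":  ("adaptive_filter_imu", {"imu": 1.0, "mic": 0.2}),
    "running":  ("adaptive_filter_imu", {"imu": 1.0, "mic": 0.2}),
    "speaking": ("adaptive_filter_mic", {"imu": 0.3, "mic": 1.0}),
    "chewing":  ("multivariate_emd",    {"imu": 0.3, "mic": 1.0}),
    "resting":  ("passthrough",         {"imu": 0.0, "mic": 0.0}),
}

def select_noise_reduction(context):
    """Return (algorithm name, reference-signal weights) for a context,
    falling back to the conservative 'resting' policy when unknown."""
    return CONTEXT_POLICY.get(context, CONTEXT_POLICY["resting"])
```

 A DNN-based variant would replace the table lookup with a model that maps the context (or the raw sensor signals directly) to the denoising behavior.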

 To execute the signal processing of these algorithms efficiently, the noise reduction unit 352 may switch the processor (IC) that executes the noise reduction according to the selected algorithm. For example, a control IC may be used for numerical analysis algorithms and an AI (Artificial Intelligence) IC for DNN models. More specifically, an AI IC may be used for the noise reduction algorithm and a control IC for the subsequent feature extraction. Within the noise reduction algorithms, an AI IC may be used for algorithms based on DNN models and a control IC for algorithms based on adaptive filters, multivariate EMD analysis, and the like. Adaptively selecting the IC according to the type of computation in this way makes it possible to speed up the computation and reduce power consumption.

 In addition to the output signals of the IMU 320 and the microphone 330, the noise reduction unit 352 may perform noise reduction using the impedance measurement values of the electrodes of the biosensor (EEG sensor 310) as a reference signal.

 After noise reduction has been performed in this way, in step S34 the signal quality estimation unit 353 estimates the signal quality of the EEG signal after noise reduction and attaches a label representing that signal quality to the EEG signal.

 In step S35, the feature extraction unit 354 extracts features from the labeled EEG signal. The extracted features are assigned a confidence based on the label attached to the EEG signal and transmitted to the computer 200 by wireless transmission.

 Performing the wireless transmission after feature extraction in this way reduces the amount of information transmitted to the computer 200 over a wireless link with limited bandwidth and power budget. Assigning a confidence to the features also improves the accuracy of the emotion estimation and stress estimation performed for the user on the computer 200.
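 For illustration, step S35 could compute simple EEG band-power features and attach a confidence derived from the quality label. The band definitions, quality labels, and quality-to-confidence mapping below are assumptions for this sketch, not values specified in the present disclosure, and a real implementation would use an FFT rather than the naive DFT shown here.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in [f_lo, f_hi) Hz via a naive DFT
    (sufficient for a sketch; use an FFT in practice)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

# Assumed mapping from the signal quality label to a confidence value.
QUALITY_TO_CONFIDENCE = {"good": 1.0, "fair": 0.6, "poor": 0.2}

def extract_features(eeg, fs, quality_label):
    """Return EEG band-power features tagged with a confidence."""
    bands = {"delta": (1.0, 4.0), "theta": (4.0, 8.0),
             "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
    features = {name: band_power(eeg, fs, lo, hi)
                for name, (lo, hi) in bands.items()}
    return {"features": features,
            "confidence": QUALITY_TO_CONFIDENCE.get(quality_label, 0.2)}
```

 Transmitting only such a small feature-plus-confidence record, rather than the raw EEG samples, is what keeps the wireless payload small in the scheme described above.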

 コンピュータ200においては、ユーザ状態推定部221が、TWSイヤフォン300から送信されてきた特徴量とその信頼度に基づいて、情動推定アルゴリズムによりユーザの情動を推定したり、ストレス推定アルゴリズムによりユーザの精神的な疲労度を推定したりする。アプリケーション実行部222は、ユーザ状態推定部221の推定結果を加工し、UI(User Interface)を介して画像や音声などによりユーザに提示する。 In the computer 200, the user state estimation unit 221 estimates the user's emotions using an emotion estimation algorithm and the user's level of mental fatigue using a stress estimation algorithm based on the feature amount and its reliability transmitted from the TWS earphone 300. The application execution unit 222 processes the estimation results of the user state estimation unit 221 and presents them to the user in the form of images, sounds, etc. via a UI (User Interface).

 According to the above processing, noise reduction is performed in accordance with the user's body movement context, so noise mixed into the biosignal by various movements in daily life can be removed efficiently, and as a result, cleaner biosignals can be acquired.
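Context-dependent noise reduction can be organized as a dispatch from the estimated body movement context to a noise reduction configuration (which references to use and which algorithm to run). The context names and settings below are illustrative assumptions, not from the disclosure:

```python
# Illustrative mapping from an estimated body-movement context to a
# noise reduction configuration; contexts and settings are assumptions.
NR_BY_CONTEXT = {
    "still":   {"refs": [],             "algorithm": "bandpass_only"},
    "walking": {"refs": ["imu"],        "algorithm": "lms"},
    "talking": {"refs": ["mic"],        "algorithm": "lms"},
    "running": {"refs": ["imu", "mic"], "algorithm": "lms"},
}
DEFAULT_NR = {"refs": ["imu", "mic"], "algorithm": "lms"}

def select_noise_reduction(context):
    """Return the noise reduction configuration for a context, falling
    back to using every reference when the context is unrecognized."""
    return NR_BY_CONTEXT.get(context, DEFAULT_NR)
```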

 In the above, wireless transmission is performed after feature extraction in view of the limitations on transmission bandwidth and power consumption in wireless transmission. This is not limiting: if, in the future, the processing power and RAM (Random Access Memory) capacity of the IC chip on the edge terminal (TWS earphone 300) side improve, the processing after feature extraction, up to the estimation of the user's state, may be performed on the edge terminal side. Likewise, if wireless transmission technology improves, the EEG signal and all of the output signals (reference signals) may be transmitted from the edge terminal to the CPU (computer 200) side, and the processing from body movement context estimation and noise reduction onward may be executed on the CPU side.

<4. Microphone appearance and placement examples>
 Here, the appearance of the microphone 330 included in the TWS earphone 300 and examples of its placement will be described.

 FIG. 5 is a diagram showing an example of the external configuration of the microphone 330.

 The microphone 330 is configured with an IC chip and the like provided inside a case formed, for example, in the shape of a flat rectangular parallelepiped, and a sound pickup hole 330h is formed in part of the case. The microphone 330 may be a MEMS (Micro-Electro-Mechanical Systems) microphone or an electret condenser microphone. When the microphone 330 is configured as an FB microphone for noise canceling, noise canceling is performed based on the external sound picked up through the sound pickup hole 330h.

 FIG. 6 is a diagram showing an example of the placement of the microphone 330 when the TWS earphone 300 is a canal-type earphone.

 The earphone 300A shown in parts A and B of FIG. 6 is composed of a housing 411 and an earpiece 412.

 The housing 411 contains a driver unit 421 that converts audio signals into sound. The sound emitted from the driver unit 421 is delivered to the user's ear through a sound guide tube 422. The earpiece 412 is removably attached to the tip of the sound guide tube 422 and serves to transmit the sound emerging from the sound guide tube 422 to the ear without distortion while sealing the user's ear and blocking out surrounding sounds. In the earphone 300A of this embodiment, the earpiece 412 also functions as an electrode of the EEG sensor 310.

 In the canal-type earphone 300A configured in this way, as shown in part A of FIG. 6, the microphone 330 may be placed inside the housing 411 and configured as an FB microphone for noise canceling that picks up the sound inside the sound guide tube 422. This placement takes the acoustic characteristics of noise canceling into account. In this case, cost and power consumption can be reduced compared with a configuration in which a microphone is provided separately from the FB microphone for noise canceling.

 Alternatively, as shown in part B of FIG. 6, the microphone 330 may be placed inside the sound guide tube 422, immediately adjacent to the earpiece 412. This placement is specialized for removing the noise contained in the EEG signal from the EEG sensor 310.

 FIG. 7 is a diagram showing an example of the placement of the microphone 330 when the TWS earphone 300 is an inner-ear type earphone.

 The earphone 300B shown in FIG. 7 is composed of a housing 431 and an electrode 432.

 The electrode 432 is provided on the portion of the housing 431 that contacts the concha at the entrance of the user's ear and functions as an electrode of the EEG sensor 310. The electrode 432 may be formed of a conductive resin or of metal.

 In the inner-ear type earphone 300B configured in this way, the microphone 330 may be placed inside the housing 431, immediately adjacent to the electrode 432. This placement is specialized for removing the noise contained in the EEG signal from the EEG sensor 310.

<5. Other>
 The embodiment described above obtains EEG signals with TWS earphones and provides services based on them. The hardware configuration to which the technology according to the present disclosure can be applied is not limited to earphones and may be headband-type headphones, an HMD (Head Mounted Display), or the like. Furthermore, the biosignals handled by the technology according to the present disclosure are not limited to EEG signals and may be biosignals measured by a photoplethysmography sensor, an electromyography sensor, an electrooculography sensor, or the like.

 Note that the effects described in this specification are merely examples and are not limiting; other effects may also be obtained.

 Furthermore, embodiments of the technology according to the present disclosure are not limited to those described above, and various modifications are possible without departing from the gist of the technology according to the present disclosure.

Furthermore, the present disclosure can have the following configurations.
(1)
A hearable device including:
a microphone;
an inertial sensor;
a biosensor that measures a biosignal of a user; and
a noise reduction unit that performs noise reduction on the biosignal using an output signal from at least one of the microphone and the inertial sensor as a reference signal.
(2)
The hearable device according to (1), wherein the microphone is provided inside a housing.
(3)
The hearable device according to (1) or (2), wherein the microphone has a low-frequency response of at least -15 dB/Pa.
(4)
The hearable device according to any one of (1) to (3), further including a feature extraction unit that extracts a feature from the biological signal after the noise reduction is performed, wherein the feature is used to estimate a state of the user.
(5)
The hearable device according to (4), wherein the feature amount is transmitted to a computer that provides a service based on the estimated state of the user.
(6)
The hearable device according to (4) or (5), wherein the estimation of the user's state includes estimation of the user's emotion or estimation of the user's stress.
(7)
The hearable device according to any one of (4) to (6), further including a signal quality estimation unit that estimates a signal quality of the biological signal after the noise reduction is performed, wherein the feature extraction unit assigns a reliability based on the estimated signal quality to the feature extracted from the biological signal.
(8)
The hearable device according to any one of (1) to (7), further including a body movement context estimation unit that estimates a body movement context of the user based on the biological signal and the output signal, wherein the noise reduction unit performs the noise reduction according to the estimated body movement context.
(9)
The hearable device according to (8), wherein the noise reduction unit selects an algorithm for performing the noise reduction in response to the estimated body movement context.
(10)
The hearable device according to (9), wherein the noise reduction unit switches a processor that executes the noise reduction depending on the selected algorithm.
(11)
The hearable device according to any one of (8) to (10), wherein the noise reduction unit changes a type or weighting of the output signal to be used as the reference signal in accordance with the estimated body movement context.
(12)
The hearable device according to any one of (1) to (11), wherein the microphone is provided immediately adjacent to an electrode of the biosensor.
(13)
The hearable device according to any one of (1) to (12), wherein the microphone is a feedback microphone for noise canceling.
(14)
The hearable device according to any one of (1) to (13), wherein the noise reduction unit performs the noise reduction using an impedance measurement value of an electrode of the biosensor as the reference signal in addition to the output signal.
(15)
The hearable device according to any one of (1) to (14), wherein the biosensor includes at least one of an EEG (electroencephalography) sensor and a PPG (photoplethysmography) sensor.
(16)
The hearable device according to any one of (1) to (15), wherein the inertial sensor includes at least one of an acceleration sensor and a gyro sensor.
(17)
The hearable device according to any one of (1) to (16), wherein the microphone is a MEMS (Micro-Electro-Mechanical Systems) microphone or an electret condenser microphone.
(18)
The hearable device according to any one of (1) to (17), configured as earphones or headphones.
(19)
An integrated circuit comprising: a noise reduction unit that performs noise reduction on a user's biosignal measured by a biosensor using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal.
(20)
A biosignal measurement system including:
a hearable device;
a noise reduction unit that performs noise reduction on a biosignal of a user measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal;
a feature extraction unit that extracts a feature from the biosignal after the noise reduction is performed; and
a user state estimation unit that estimates a state of the user based on the extracted feature.

 10 Biosignal measurement system, 100 Hearable device, 110 Biosensor, 120 Inertial sensor, 130 Microphone, 140 IC chip, 151 Noise reduction unit, 152 Feature extraction unit, 200 Computer, 210 CPU, 221 User state estimation unit, 222 Application execution unit, 300 TWS earphone, 310 EEG sensor, 320 IMU, 330 Microphone, 351 Body movement context estimation unit, 352 Noise reduction unit, 353 Signal quality estimation unit, 354 Feature extraction unit

Claims (20)

A hearable device comprising:
a microphone;
an inertial sensor;
a biosensor that measures a biosignal of a user; and
a noise reduction unit that performs noise reduction on the biosignal using an output signal from at least one of the microphone and the inertial sensor as a reference signal.
The hearable device according to claim 1, wherein the microphone is provided inside a housing.
The hearable device according to claim 1, wherein the microphone has a low-frequency response of at least -15 dB/Pa.
The hearable device according to claim 1, further comprising a feature extraction unit that extracts a feature from the biological signal after the noise reduction is performed, wherein the feature is used to estimate a state of the user.
The hearable device according to claim 4, wherein the feature is transmitted to a computer that provides a service based on the estimated state of the user.
The hearable device according to claim 4, wherein the estimation of the user's state includes estimation of the user's emotion or estimation of the user's stress.
The hearable device according to claim 4, further comprising a signal quality estimation unit that estimates a signal quality of the biological signal after the noise reduction is performed, wherein the feature extraction unit assigns a reliability based on the estimated signal quality to the feature extracted from the biological signal.
The hearable device according to claim 1, further comprising a body movement context estimation unit that estimates a body movement context of the user based on the biological signal and the output signal, wherein the noise reduction unit performs the noise reduction in accordance with the estimated body movement context.
The hearable device according to claim 8, wherein the noise reduction unit selects an algorithm for performing the noise reduction depending on the estimated body movement context.
The hearable device according to claim 9, wherein the noise reduction unit switches a processor that executes the noise reduction depending on the selected algorithm.
The hearable device according to claim 8, wherein the noise reduction unit changes a type or a weighting of the output signal used as the reference signal, depending on the estimated body movement context.
The hearable device according to claim 1, wherein the microphone is provided immediately adjacent to an electrode of the biosensor.
The hearable device according to claim 1, wherein the microphone is a feedback microphone for noise canceling.
The hearable device according to claim 1, wherein the noise reduction unit performs the noise reduction using an impedance measurement value of an electrode of the biosensor as the reference signal in addition to the output signal.
The hearable device according to claim 1, wherein the biosensor includes at least one of an EEG (electroencephalography) sensor and a PPG (photoplethysmography) sensor.
The hearable device according to claim 1, wherein the inertial sensor includes at least one of an acceleration sensor and a gyro sensor.
The hearable device according to claim 1, wherein the microphone is a MEMS (Micro-Electro-Mechanical Systems) microphone or an electret condenser microphone.
The hearable device according to claim 1, configured as earphones or headphones.
An integrated circuit comprising a noise reduction unit that performs noise reduction on a biosignal of a user measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in a hearable device as a reference signal.
A biosignal measurement system comprising:
a hearable device;
a noise reduction unit that performs noise reduction on a biosignal of a user measured by a biosensor, using an output signal from at least one of a microphone and an inertial sensor provided in the hearable device as a reference signal;
a feature extraction unit that extracts a feature from the biosignal after the noise reduction is performed; and
a user state estimation unit that estimates a state of the user based on the extracted feature.
PCT/JP2024/014279 2023-04-27 2024-04-08 Hearable device, integrated circuit, and biological signal measurement system Pending WO2024225007A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-073646 2023-04-27
JP2023073646 2023-04-27

Publications (1)

Publication Number Publication Date
WO2024225007A1 true WO2024225007A1 (en) 2024-10-31

Family

ID=93256280

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/014279 Pending WO2024225007A1 (en) 2023-04-27 2024-04-08 Hearable device, integrated circuit, and biological signal measurement system

Country Status (1)

Country Link
WO (1) WO2024225007A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009195590A (en) * 2008-02-25 2009-09-03 Seiko Epson Corp Biological information processing apparatus and method and program for controlling biological information processing apparatus
US20180116514A1 (en) * 2016-11-02 2018-05-03 Bragi GmbH Earpiece with in-ear electrodes
US20210022641A1 (en) * 2018-04-12 2021-01-28 The Regents Of The University Of California Wearable multi-modal bio-sensing system
CN113425276A (en) * 2021-06-28 2021-09-24 南昌勤胜电子科技有限公司 Heart rate monitoring method, earphone and computer storage medium
WO2023032281A1 (en) * 2021-08-30 2023-03-09 ソニーグループ株式会社 Information processing device, information processing method, and program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAEMMERER CHRISTOPH: "Optical Heart Rate Measurement at the Earbud", www.analog.com, 1 April 2019 (2019-04-01), pages 1-12, XP093230769, Retrieved from the Internet <URL:https://www.analog.com/en/resources/technical-articles/optical-heart-rate-measurement-at-the-earbud.html> *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24796763

Country of ref document: EP

Kind code of ref document: A1