WO2024176149A1 - Method and system for determining facial movements
- Publication number
- WO2024176149A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sensors
- sensor
- subject
- sensing device
- user
- Legal status
- Ceased
Classifications
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015 — Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
- A61B5/291 — Bioelectric electrodes specially adapted for electroencephalography [EEG]
- A61B5/296 — Bioelectric electrodes specially adapted for electromyography [EMG]
- A61B5/297 — Bioelectric electrodes specially adapted for electrooculography [EOG]; for electroretinography [ERG]
- A61B5/6803 — Head-worn items, e.g. helmets, masks, headphones or goggles
- A61B5/6814 — Head
- A61B5/6816 — Ear lobe
- A61B5/6817 — Ear canal
- G06N3/02 — Neural networks
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Definitions
- The invention relates to determining facial movements, and more specifically to wearable systems and methods for determining facial movements.
- Facial movements, including eye movements, are generated as a result of messages sent from the brain to the muscles of the face.
- The messages are sent in the form of electrochemical impulses from the motor cortex of the brain, and communicated by nerves to activate the appropriate muscle or muscle group.
- The electrochemical signals may have a detectable signature, and the activation of the muscle may itself have a detectable signature. These signatures can be detected, for example by biopotential sensors placed on the skin. Signals corresponding to different facial movements can be differentiated by the nature and location of the signals, which may permit a determination of which facial movement has occurred.
- Head-mounted sensors have been developed. Some of these systems have drawbacks, such as being bulky or inconvenient to wear, or having one or more cameras in the wearer’s line of sight that obstruct the wearer’s vision.
- Some types of sensors, particularly those used to gather data for input to an artificial intelligence (AI), use gel or wet electrodes to improve the signal quality, which may be unpleasant or otherwise undesirable for the user.
- a method of identifying a facial gesture performed by a user includes: receiving at least one signal input from at least one sensor in contact with a head of the user; identifying at least one gesture of the user corresponding with the at least one received signal; mapping the at least one identified gesture to at least one action to be taken; and transmitting a command to an electronic device to perform the action.
- identifying the at least one gesture is performed by a neural network (NN).
- the NN comprises a transformer.
- the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
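- By way of illustration only, the following is a minimal PyTorch sketch of how a classifier combining two of the network types named above (a CNN and an LSTM) over windowed biosignals might look; the architecture, layer sizes, and names (e.g. GestureNet) are assumptions for illustration and are not taken from the disclosure.

```python
# Hypothetical sketch of a small CNN + LSTM gesture classifier; not the
# patent's implementation. Shapes and sizes are assumed values.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_channels: int = 4, n_gestures: int = 5):
        super().__init__()
        # temporal convolution extracts local waveform features
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models longer-range temporal structure across the window
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window_len)
        h = self.conv(x)               # (batch, 16, window_len // 2)
        h = h.transpose(1, 2)          # (batch, time, features) for the LSTM
        out, _ = self.lstm(h)
        return self.head(out[:, -1])   # gesture logits from the last step

logits = GestureNet()(torch.randn(8, 4, 250))  # 8 windows, 4 channels, 250 samples
```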
- the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
- the at least one sensor is included in a wearable sensing device.
- the at least one sensor is disposed near or in an ear of the user.
- the at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
- the sensor is capable of detecting an electrochemical or biopotential signal.
- the sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
- the at least one sensor is configured to detect a specific facial gesture of the user.
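- As a hedged illustration of the overall method summarized above (receive signal, identify gesture, map to action, transmit command), the following Python sketch shows one possible shape of that pipeline; all names (Gesture, ACTION_MAP, process_window) and the example gesture-to-action mappings are hypothetical, not from the disclosure.

```python
# Illustrative gesture pipeline: classify a signal window, map the
# identified gesture to an action, and emit a command to a device.
from enum import Enum, auto
from typing import Callable, Sequence

class Gesture(Enum):
    BLINK = auto()
    WINK = auto()
    JAW_MOVEMENT = auto()
    HEAD_MOVEMENT = auto()
    NONE = auto()

# gesture -> action mapping; configurable per application (assumed values)
ACTION_MAP: dict[Gesture, str] = {
    Gesture.BLINK: "select",
    Gesture.WINK: "next_item",
    Gesture.JAW_MOVEMENT: "open_menu",
    Gesture.HEAD_MOVEMENT: "scroll",
}

def process_window(window: Sequence[float],
                   classify: Callable[[Sequence[float]], Gesture],
                   send_command: Callable[[str], None]) -> None:
    """Identify a gesture in one signal window and act on it."""
    gesture = classify(window)            # e.g. a trained neural network
    action = ACTION_MAP.get(gesture)      # map gesture to an action
    if action is not None:
        send_command(action)              # transmit command to the device
```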
- a system of identifying a facial gesture performed by a user comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving at least one signal input from at least one sensor in contact with a head of the user; identifying at least one gesture of the user corresponding with the at least one received signal; mapping the at least one identified gesture to at least one action to be taken; and transmitting a command to an electronic device to perform the action.
- identifying the at least one gesture is performed by a neural network (NN).
- the NN comprises a transformer.
- the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
- the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
- the at least one sensor is included in a wearable sensing device.
- the at least one sensor is disposed near or in an ear of the user.
- the at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
- the sensor is capable of detecting an electrochemical or biopotential signal.
- the sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
- the at least one sensor is configured to detect a specific facial gesture of the user.
- a method of training a neural network (NN) for identifying a facial gesture includes: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
- the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by the at least one user.
- the at least one user is an intended user of the wearable sensing device.
- preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
- preprocessing the training data comprises dividing the training data into a plurality of time windows.
- a system of training a neural network (NN) for identifying a facial gesture comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
- the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by at least one user.
- the at least one user is an intended user of the wearable sensing device.
- preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
- preprocessing the training data comprises dividing the training data into a plurality of time windows.
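- The following is a minimal sketch of the windowing step described above, assuming fixed-length, possibly overlapping windows over multi-channel recordings; the window length, overlap, sampling rate, and function names are illustrative assumptions, not taken from the disclosure.

```python
# Illustrative preprocessing: divide raw multi-channel sensor recordings
# into fixed-length time windows before feeding them to a neural network.
import numpy as np

def make_windows(signal: np.ndarray, window_len: int, step: int) -> np.ndarray:
    """Split (n_samples, n_channels) data into overlapping windows.

    Returns an array of shape (n_windows, window_len, n_channels).
    """
    n_samples = signal.shape[0]
    starts = range(0, n_samples - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# Example: 10 s of 4-channel data at an assumed 250 Hz rate,
# 1 s windows with 50% overlap.
raw = np.random.randn(2500, 4)      # stand-in for recorded biosignals
windows = make_windows(raw, window_len=250, step=125)
labels = np.zeros(len(windows))     # per-window labels derived from the
                                    # predetermined gestures performed
                                    # during collection
```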
- a wearable sensing device comprising: an elongated body extending between a first end and a second end and comprising a body inner face, the body inner face facing a subject when the wearable sensing device is worn by the subject; a first temple arm extending from the first end of the elongated body and a second temple arm extending from the second end of the elongated body, the first temple arm and the second temple arm each for engaging a respective ear of the subject; a bridge body mounted to the elongated body, the bridge body being configured for engaging a nose of the subject and comprising a first arm and a second arm each for abutting a respective side of the nose, each one of the first arm and the second arm comprising a respective arm inner face for engagement with the nose of the subject; at least one first sensor mounted to the inner face of the elongated body so as to engage a forehead of the subject when the wearable sensing device is worn by the subject, the at least one first sensor for measuring at least one of a ground biosignal and a reference biosignal; and at least one second sensor mounted to the arm inner face of at least one of the first arm and the second arm, the at least one second sensor for measuring an active biosignal.
- the elongated body comprises a protrusion projecting from the inner face thereof, the at least one first sensor being mounted to the protrusion.
- the protrusion is provided with a curved shape.
- the protrusion is located substantially in the middle of the elongated body.
- the wearable sensing device further comprising at least one third sensor mounted to at least one of the first temple arm and the second temple arm for engaging a skull of the subject when the wearable sensing device is worn by the subject, the at least one third sensor for measuring at least one of an additional ground biosignal and an additional reference biosignal.
- the at least one third sensor comprises two biosignal sensors each mounted to a respective one of the first temple arm and the second temple arm.
- the at least one first sensor, the at least one second sensor and the at least one third sensor each comprise at least one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and an electromyography (EMG) sensor.
- the first temple arm and the second temple arm are each rotatably mounted to the elongated body.
- a method for operating sensors to obtain biosignals comprising: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
- said determining a lowest impedance value comprises determining two lowest impedance values and said identifying a given one of the plurality of sensors comprises identifying two given ones of the plurality of sensors.
- said operating the given one of the plurality of sensors comprises operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and a second one of the two given ones as a reference sensor to obtain the reference biosignal.
- the method further comprising: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
- a system for operating sensors to obtain biosignals comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
- the processor is configured for determining two lowest impedance values and said identifying a given one of the plurality of sensors comprises identifying two given ones of the plurality of sensors.
- the processor is configured for operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and operating a second one of the two given ones as a reference sensor to obtain the reference biosignal.
- the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
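- A minimal sketch, in Python, of the impedance-based role assignment described above, assuming each impedance is estimated from a known injected alternating current and the measured voltage (Z = V / I); the function name, argument forms, and example values are illustrative assumptions.

```python
# Illustrative sensor-role assignment: the two lowest-impedance sensors
# become ground and reference; the rest record active biosignals.
def assign_sensor_roles(currents, voltages):
    """Pick the two lowest-impedance sensors as ground and reference.

    currents, voltages: per-sensor RMS amplitudes of equal length.
    Returns (ground_idx, reference_idx, active_idxs).
    """
    impedances = [v / i for v, i in zip(voltages, currents)]
    order = sorted(range(len(impedances)), key=lambda k: impedances[k])
    ground_idx, reference_idx = order[0], order[1]
    active_idxs = order[2:]        # remaining sensors obtain active signals
    return ground_idx, reference_idx, active_idxs

# Example with five sensors: an assumed 10 uA test current and measured voltages.
ground, reference, active = assign_sensor_roles(
    currents=[10e-6] * 5,
    voltages=[0.12, 0.05, 0.30, 0.04, 0.21],   # volts -> impedance in ohms
)
```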
- a system for obtaining biosignals comprising: a wearable sensing device mountable to a head of a user and comprising a plurality of sensors each to be located at a respective position on the head when the wearable sensing device is worn by the user; a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of the plurality of sensors, each measured at a respective position on a body of a subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
- the wearable sensing device comprises: at least one earcup for engaging an ear of a subject and comprising an inner edge, the inner edge facing the ear of the subject when the wearable sensing device is worn by the subject; a support body coupled to the at least one earcup for holding the at least one earcup over the ear of the subject when the wearable sensing device is worn by the subject; and wherein the plurality of sensors are located on the inner edge of the earcup.
- the at least one earcup comprises two earcups.
- At least one sensor of the plurality of sensors is located on a mastoid bone of the subject.
- at least one of the plurality of sensors is located in front of the ear, aligned with an eye of the subject.
- the support body engages a head portion of the subject when the wearable sensing device is worn by the subject and comprises an inner surface, at least one of the plurality of sensors being located on the inner surface of the support body.
- the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising an outer portion configured for facing and at least partially engaging the ear canal of the subject when the wearable sensing device is worn by the subject, wherein the plurality of sensors are located on the outer portion of the earbud.
- the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising a top portion, a bottom portion opposite to the top portion and an outer wall extending between the top portion and the bottom portion, the outer wall configured for facing and at least partially engaging the ear canal of the user when the wearable sensing device is worn by the user, wherein the plurality of sensors are mounted on the outer wall of the earbud so as to engage with the ear canal when the earbud is worn by the user.
- the plurality of sensors extend longitudinally on the outer wall between the top portion and the bottom portion.
- the plurality of sensors have a respective radial position about the outer wall.
- the plurality of sensors have a dot shape.
- the plurality of sensors have a respective radial position about the outer wall and a longitudinal position between the top portion and the bottom portion.
- the plurality of sensors extend along a circumference of the outer wall between the top portion and the bottom portion.
- the plurality of sensors have a respective longitudinal position between the top portion and the bottom portion.
- the plurality of sensors have a circular shape.
- the wearable sensing device comprises: a head mounted display for engaging a portion of the face of a subject and comprising an inner edge, the inner edge facing the portion of the face of the subject when the wearable sensing device is worn by the subject; a support body coupled to the head mounted display for engaging a head portion of the subject and for holding the head mounted display over the portion of the face of the subject when the wearable sensing device is worn by the subject, and comprising an inner surface; and wherein the plurality of sensors are located on at least one of the inner edge of the head mounted display and the inner surface of the support body.
- the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
- each one of the plurality of sensors comprises one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor, and an electromyography (EMG) sensor.
- The terms “sensors” and “electrodes” can be used interchangeably. While the term “sensor” is employed more frequently throughout the disclosure, this choice of terminology is not meant to limit the scope of the present disclosure. Both sensors and electrodes encompass devices or components designed to detect and/or measure biosignals by interacting with at least a body part of a user.
- FIG. 1 is a block diagram of an electronic device, according to various embodiments of the present disclosure
- FIG. 2 is a schematic diagram of the face muscles of a human
- FIG. 3 illustrates a wearable sensing device having a shape comparable to that of a pair of glasses, according to various embodiments of the present disclosure
- FIG. 4 illustrates a wearable sensing device comprising a pair of earpieces, according to various embodiments of the present disclosure
- FIG. 5 illustrates a first exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure
- FIG. 6 illustrates a second exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure
- FIG. 7 illustrates a third exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure
- FIG. 8 illustrates a fourth exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure
- FIG. 9 illustrates a fifth exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure
- FIG. 10A illustrates a wearable sensing device having a shape comparable to that of a pair of headphones comprising earcups, according to various embodiments of the present disclosure
- FIG. 10B illustrates the earcups of the wearable sensing device of FIG. 10A
- FIG. 11A illustrates a wearable sensing device having a shape comparable to that of a virtual/augmented reality gear, according to various embodiments of the present disclosure
- FIGs. 11B-11C illustrate a support body that is coupled to the wearable sensing device of FIG. 11A, according to various embodiments of the present disclosure
- FIG. 12 illustrates a flowchart of a method of operating sensors to obtain biosignals, according to various embodiments of the present disclosure
- FIG. 13 illustrates a flowchart of a method of operating a wearable sensing device, according to various embodiments of the present disclosure
- FIG. 14 illustrates a flowchart of a method of identifying a gesture or movement of a user, according to various embodiments of the present disclosure.
- FIG. 15 illustrates a block diagram of a computing device, according to various embodiments of the present disclosure.
- the signals may be detected by sensors/electrodes positioned in one or more locations on the head of the wearer, for example in and around the ears, on the bridge of the nose, or in any other suitable locations. These sensors can detect one or more of eye movements, facial movements, or mouth and tongue movements. Furthermore, because of the effect of regional movements produced by other areas of the body, gross movements from other regions of the body can be sensed from the head by interpreting the shadow of their signals. As a result, in some embodiments the received signals may allow the determination and identification of noise artifacts produced by different parts of the body, including step tracking, head tracking, speech decoding, and others.
- noise artifacts from body movements can be observed in the electroencephalography (EEG) signals from the head.
- the sensors can optionally be tuned to optimize their mechanical, electrical, and chemical properties to target specific muscles or muscle groups in order to more effectively identify them.
- the facial movements of a user can be determined by sensors placed in or around the ear, or elsewhere on the head. By modifying the approach of collecting and decoding these signals, the determinations of these sensors can be optimized to detect the movements more efficiently, from which a variety of useful commands can be created.
- facial movement detection has applications for human-computer interaction.
- a seamless method can be provided for a user to interact with devices hands-free and voice-free, by using subtle facial movements, such as eye movement or blinking, to generate commands for an electronic device.
- the technology described herein may be applied in assistive technologies.
- People with movement disabilities, such as paralysis, paraplegia, or quadriplegia, may rely on mouth sticks or switches to engage with devices; however, the embodiments disclosed herein provide a method for those with such disabilities to interact with their devices or connected environments, such as virtual keyboards, electric wheelchairs, feeding machines, home assistance devices, and others, using just the signals produced from their brain, as well as the limited muscles over which they still have control.
- Some embodiments disclosed herein may be used for determining the emotional and physical state of a user for medical or other purposes.
- Some embodiments disclosed herein may be used for the creation of digital avatars that reflect not only the physical activity of their real counterparts, but also their expression, attention, and emotion.
- a system detects and maps signals from the brain and body to actionable insights to be used by devices.
- sensor locations should be considered that would be acceptable to consumers for daily use. These include locations generally accepted to be in contact with existing devices that are worn on the head. However, some existing headsets require attachment to the user's face, which may cause visual occlusion. Some ear-based sensing techniques have been proposed to address this problem. For example, Gruebler, A. et al., (2014), Design of a Wearable Device for Reading Positive Expressions from Facial EMG Signals, IEEE Transactions on Affective Computing, 5(3), 227-237, (“Gruebler”) describes a facial motion detector positioned around the ear with built-in EMG electrodes.
- Gruebler discloses that EMG electrodes, even sitting on less mobile locations of the face (e.g. the side of the face), can detect signature patterns related to certain facial expressions. However, this approach may result in lower accuracy due to the increased distance between the sensors and the source of the signal on the face.
- One alternative approach to tracking facial movements uses “hearable” devices, which combine conventional sound-listening earphones with various biosensing systems.
- Manabe H. et al., (2013), Conductive rubber electrodes for earphone-based eye gesture input interface, Proceedings of the 17th Annual International Symposium on Wearable Computers (ISWC ’13), 33, (“Manabe”) describes earphones containing electrooculography (EOG) electrodes made of conductive rubber to track eye movement from inside the ear.
- Taniguchi K. et al., (2016), Earable TEMPO: A Novel, Hands-Free Input Device that Uses the Movement of the Tongue Measured with a Wearable Ear Sensor, Sensors, 18(3), 733, (“Taniguchi”) describes using earphone-type photo sensors with infrared light emitting diodes (LEDs) and phototransistors to recognize tongue movements, allowing the users to control a music player by pushing the tongue against the roof of the mouth. Amesaka, T.
- a wearable sensing device 100 receives one or more sensor inputs 101.
- the sensors/electrodes may be one or more of the following types: EEG sensors located in or around the ear; EOG sensors located in or around the ear; electromyography (EMG) sensors located in or around the ear; inertial sensors such as gyroscopes or accelerometers; directional or omnidirectional microphones mounted to the device; or external microphones. Additional sensor types will be described below or may be apparent to persons skilled in the art.
- the sensor inputs 101 may be indicative of eye movements or other facial movements of the wearer of the wearable sensing device, as will be discussed below in further detail.
- the detected signals may be electrochemical in nature, and propagated outward to the skin, where they can be noninvasively detected using biopotential sensors.
- Some conventional sensors use a conductive metal in conjunction with a silver-silver chloride solution which can be suitable for some medical applications.
- other types of sensors have been developed to be more suitable for commercial use, for example being easier to use and having lower maintenance.
- One example is a conductive polymer sensor, where one or more biocompatible polymer matrices, such as PDMS, TPU, or others, are combined with one or more conductive filler materials, such as carbon nanotubes, silver nanoparticles of various shapes, or others.
- Biopotential sensors and other biosensors capture the signals and send them to specialized hardware and software, where they are processed to determine the intended signal or movement using proprietary algorithmic and artificial intelligence methods.
- Individual sensors, or sets of sensors may be optimized to detect specific signals. When signals are sent through the nervous system to affect different areas or groups of the human body, specific sensors and receptors are used, and thus, a unique electrochemical signature is used to command different muscles or muscle groups.
- the ionic conduction of these signals through the body, which also propagates outward to the skin, can be detected by ionic sensors placed on the surface of the skin. These ionic sensors can be tuned to detect the unique electrochemistry of the action potentials, or biopotentials, generated by these specific movements.
- one or more ionic sensors can be capable of differentiating the unique signals detected at the same locations corresponding to different muscle movements.
- the electrical properties of the sensor can be tuned to become more sensitive to the target signal by manipulating its composition.
- the sensors may be chosen to be dry, wet, or semi-wet sensors, and their composition may be chosen to include one or more of silver/silver-chloride, gels, hydrogels, polymers, conductive elements or fillers, conductive coatings, or other materials, depending on the particular signal or signals to which the sensors should be sensitive.
- a sensor for detecting brain activity may be made with a conductive polymer material.
- a suitable sensor for detecting muscle activity may be made with a conductive polymer material, and may be designed or tuned (for example by selecting the conductive polymer material) to detect or be particularly sensitive to one or more particular muscles, muscle groups, or biosignals.
- the number, location, and geometry of the sensors may also be selected to have increased sensitivity to one or more particular muscles, muscle groups, or biosignals.
- sensors may include strain gauges, LED lights, IR lights, microphones, biopotential sensors, inertial measurement units, and temperature sensors. Additional suitable types of sensors may be known to persons skilled in the art.
- one or more brain sensors or one or more muscle sensors may operate in combination, or have their received signals combined, for example to increase the accuracy of the sensor signals or to detect a greater number of signals.
- sensors can be optimized for sensing signals at certain locations on the skin by optimizing their geometry.
- sensors for in-ear applications can be designed with an ear tip geometry, similar to the shape of commercial audio devices.
- sensors can be designed with prongs that are long enough to penetrate through the hair to reach the skin.
- sensors can be designed with specific armatures to provide consistent contact with the skin along the path of the arm.
- sensors on the body, head, or face can be optimized. Although a sensor would typically detect a better quality signal by being placed closer to the source of the signal, this is not always convenient for the wearer of the device.
- sensors placed on the cheek or jaw might not be desirable for consumer applications.
- sensors for mouth movements can be placed in more discreet or unobtrusive locations, such as around the ear or across the scalp. These locations can be optimized for detecting the movement of specific muscle groups, for example by placing the sensor along a different portion of the target muscle group or along a nerve path leading to the target muscle.
- a higher quality signal may be detected, even while placing the sensors in a position more convenient to the wearer of the device.
- the sensor inputs are processed, preferably in real-time, by one or more processors 102 which may perform operations including digital signal processing algorithms and machine learning. The processing will be described below in further detail.
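- As one hedged example of a digital signal processing step such a pipeline might apply (the disclosure does not specify one), the following Python sketch band-pass filters a raw biopotential channel with SciPy before classification; the cutoff frequencies and sampling rate are assumed values.

```python
# Illustrative DSP step: zero-phase band-pass filtering of one raw
# biopotential channel to isolate, e.g., EMG-band activity.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                                  # assumed sampling rate, Hz

def bandpass(x: np.ndarray, low: float, high: float, fs: float = FS,
             order: int = 4) -> np.ndarray:
    """Apply a zero-phase Butterworth band-pass filter to one channel."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)                # forward-backward filtering

raw = np.random.randn(2500)                 # stand-in for one sensor input
emg_band = bandpass(raw, low=20.0, high=120.0)  # keep ~20-120 Hz activity
```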
- the processing generates one or more outputs 103, which can be used for a variety of different applications.
- the outputs 103 could include, but are not limited to, one or more of: blink detection, attention, attention direction, intention, and preset electronic commands that may be used to control an electronic device.
- the wearable sensing device 100 implements a method of understanding one or more of the eye movements, facial movements and expressions, emotional state, and mental state of a user using a combination of brain and biosignals, which can be collected via a single system or set of sensors without requiring any intentional input from the user.
- the sensors may detect movements of one or more of the muscle groups 201-208 of the facial musculature 200 of the user.
- the materials used for the sensors can be specifically optimized to sense from a specific muscle, muscle group, or biosignal of interest.
- the brain sensors can operate in isolation, or in combination with other brain sensors, and/or in combination with other bio-sensors, in a sensor fusion approach, to increase accuracy and/or ability to detect different signals.
- the sensors can be positioned in different configurations, such that the sensors are in locations that can be conveniently incorporated into head-worn devices, and can be configured in locations that are distant from the sources of signal.
- signals or movements are produced by the body.
- One or more biopotential sensors or other biosensors capture a signal, and process the signal to determine one or more intended commands or movements using artificial intelligence methods that will be described below in further detail. These intended commands or movements are then mapped to different operations, based on the specific application engaged.
- Applications can range from assistive devices, to consumer electronics, to connected devices, to virtual avatars, and to other applications.
- the wearable sensing device 100 may provide a hands-free, voice-free method for users with movement disabilities or limited control over bodily movements to interact with and send commands to assistive, connected, or personal electronic devices. This may allow the user to operate a device to perform a desired function, by the deliberate use of a muscle or muscle group over which the user has control.
- the wearable sensing device 100 may permit a user to control an electronic device solely by way of the one or more sensors, thereby enabling electronic devices to be designed with fewer or no buttons, switches, or other hardware dedicated solely to the user interface, thereby potentially reducing costs and enabling innovative product designs or improved miniaturization.
- the wearable sensing device 300 includes an elongated body 302, a bridge body 310, first and second temple arms 314 and 316 and a plurality of sensors/electrodes 324, 326, 328.
- the elongated body 302 extends longitudinally between a first end 304 and a second end 306. In one embodiment, the elongated body 302 may extend substantially linearly between the first end 304 and the second end 306. In other embodiments, the elongated body 302 may have any suitable shape, such as the shape of a full-rim frame, a half-rim frame, a browline frame, a round frame, a rectangular frame, a square frame, or the like. The manner in which the elongated body 302 is implemented should not limit the scope of the present disclosure.
- the elongated body 302 comprises a top face 308-1, a bottom face 308-2 opposite to the top face 308-1, a front face 308-3 and an inner face 308-4 opposite to the front face 308-3.
- the inner face 308-4 faces the user’s face when the wearable sensing device 300 is worn by the user.
- the elongated body 302 is provided with at least one sensor 324 mounted on the inner face 308-4 thereof.
- the sensor 324 is mounted on the inner face 308-4 so as to engage or be in physical contact with the forehead of the user when the sensing device 300 is worn by the user.
- the sensor 324 is located substantially at the middle of the length of the elongated body 302 so that the sensor 324 is substantially aligned with the nose of the user when the sensing device 300 is worn by the user.
- the sensor 324 may be located at any other adequate position on the inner face 308-4.
- the elongated body 302 is provided with a plurality of sensors 324 each mounted at a respective position along the length of the inner face 308-4.
- the elongated body 302 comprises a protrusion 322 projecting from the inner face 308-4 thereof and the sensor 324 is mounted to the protrusion 322 so as to engage or be in physical contact with the forehead of the user when the sensing device 300 is worn by the user.
- While the protrusion 322 is located substantially at the middle of the length of the elongated body 302, it should be understood that the protrusion 322 with the sensor 324 mounted thereto can be located at any adequate position along the length of the inner face 308-4.
- the elongated body 302 may be provided with more than one protrusion 322 each provided with a respective sensor 324.
- the protrusion 322 is shaped and sized so that the sensor 324 mounted thereto engages or is in physical contact with the forehead of the user when the sensing device 300 is worn by the user.
- the protrusion may ensure a better connection between the sensor 324 and the forehead of the user.
- the protrusion 322 includes a recess configured for receiving therein the sensor 324.
- the protrusion 322 comprises a fastening mechanism to which the sensor 324 may be fastened. The manner in which the protrusion 322 facilitates holding of the sensors should not limit the scope of present disclosure.
- the protrusion 322 is integral with the elongated body 302. In other embodiments, the protrusion 322 may be attached or mounted to the elongated body 302 using any adequate securing method.
- the protrusion 322 is provided with a curved or rounded shape.
- the wearable sensing device 300 includes a bridge body 310 mounted to the elongated body 302.
- the bridge body 310 is configured for engaging the nose of the user, i.e. engaging opposite sides of the nose of the user.
- the bridge body 310 projects from the bottom face 308-2 of the elongated body 302.
- the bridge body 310 may be mounted to the elongated body 302 using any suitable mechanism, for example integral molded connection, saddle bridge, screw connection, snap-fit connection, hinged connection, keyhole bridge, magnetic connection, adhesive bonding, or any other suitable technique.
- the bridge body 310 comprises a first arm 312-1 and a second arm 312-2 each having an end mounted to the elongated body 302.
- the first arm 312-1 and the second arm 312-2 are each shaped and sized to abut a respective side of the nose of the user.
- each one of the first arm 312-1 and the second arm 312-2 comprises an arm inner face 318 for engagement with its respective side of the nose of the user.
- the bridge body 310 is provided with at least one sensor 320 mounted on the arm inner face 318 of at least one of its arms 312-1 and 312-2.
- the sensor 320 is mounted on the arm inner face 318 so as to engage or be in physical contact with a side of the user’s nose when the sensing device 300 is worn by the user.
- the sensor 320 is located on the arm inner face 318 so as to face the nasal bone when the sensing device 300 is worn by the user.
- the arm inner face 318 of each arm 312-1 and 312-2 is provided with a respective sensor 320 so that the bridge body 310 is provided with at least two sensors 320.
- the wearable sensing device 300 further comprises a first temple arm 314 and a second temple arm 316 which each extends longitudinally along a respective axis.
- the first temple arm 314 projects from the first end 304 of the elongated body 302 and the second temple arm 316 projects from the second end 306 of the elongated body 302.
- the first temple arm 314 and the second temple arm 316 are each shaped and sized so as to engage the top of a respective ear of the user when the wearable sensing device 300 is worn by the user.
- the first temple arm 314 and the second temple arm 316 may include hinges or flexible materials to allow for adjustment and a customized fit with respect to the elongated body 302.
- Hinges connecting the first temple arm 314 and the second temple arm 316 to the elongated body 302 may enable the first temple arm 314 and the second temple arm 316 to fold inward for compact storage.
- the hinges connecting the first temple arm 314 and the second temple arm 316 to the elongated body 302 may enable the first temple arm 314 and the second temple arm 316 to be rotatable with respect to the elongated body.
- Hinges may include any suitable mechanism for example, barrel hinges, spring hinges for flexibility, or any other suitable mechanism.
- the first temple arm 314 and/or the second temple arm 316 is provided with a sensor 326, 328.
- the sensor 326, 328 is mounted on the temple arm 314, 316 so as to engage or be in physical contact with the skull of the user when the wearable sensing device 300 is worn by the user.
- the sensor 326, 328 may be mounted on a bottom face or an inner face of the temple arm 314, 316 so that the sensor 326, 328 is located substantially on top of an ear of the user when the wearable sensing device 300 is worn by the user.
- the first temple arm 314 is provided with a sensor 326 mounted to a bottom face thereof and the second temple arm 316 is provided with a sensor 328 mounted to a bottom face thereof.
- Each sensor 326, 328 is located at a position along the length of its respective arm 314, 316 that is chosen so that the sensor 326, 328 engages a respective section of the skull of the user that is located on top of a respective ear, when the wearable sensing device 300 is worn by the user.
- the wearable sensing device 300 may include at least one lens of any suitable type, which are not shown to more clearly illustrate other features of the wearable sensing device 300. It should be understood that the wearable sensing device 300 may comprise further components such as a battery for powering the sensors, a processor, communication means, etc.
- the sensor(s) 320 is (are) to be used as active sensor(s) while the sensor 324 is to be used as a reference and/or ground sensor.
- the sensor 320 is configured for measuring a biosignal which is to be used as an active signal while the sensor 324 is configured for measuring another biosignal which is to be used as a reference and/or ground signal.
- the sensor 326, 328 may also be used as a reference and/or ground sensor to generate a reference and/or ground signal.
- the sensor 326, 328 may also be used as an active sensor to generate an active signal.
- the wearable sensing device 300 may collect signals indicative of a user’s facial movements using the sensors 320, 324, 326, and 328 without providing more obstruction to the user’s vision than a pair of glasses.
- the wearable sensing device 300 may provide the biosignals for processing.
- the processing may include storing the biosignals, displaying the biosignals, performing various biopotential measurements, and/or the like.
- Referring to FIG. 4, there is illustrated a first exemplary wearable sensing device 400 having a shape comparable to that of a pair of earpieces 401 each comprising an earbud 404 insertable into an ear canal of a user.
- the wearable sensing device 400 may include a plurality of sensors/electrodes, such as one or more active sensors 402 in an inner portion 404 of one or both earpieces 401 that would be positioned within the ear canal of the user, one or more reference sensors 406 on an outer portion 408 of one earpiece 401 that would be in contact with the outer ear of the user, and one or more ground sensors 410 on an outer portion 412 of the other earpiece 401. It is contemplated that other arrangements of sensors may be used. In this arrangement, the wearable sensing device 400 may collect signals indicative of a user’s facial movements using the sensors 402, 406, 410, while being comfortably worn by the user.
- Referring to FIG. 5, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 420 is illustrated.
- the earbud 420 extends longitudinally between a top portion or end 422 and a bottom portion or end 424 and an outer wall 426 extends between the top and bottom portions 422 and 424.
- the cross-section of the wall 426 taken at any position along the longitudinal axis is provided with a substantially circular shape.
- the outer wall 426 faces and at least partially engages the ear canal of the user when the wearable sensing device 420 is worn by the user.
- a plurality of sensors/electrodes 428, 430, and 432 are mounted on the external face of the wall 426 so as to engage with the ear canal when the earbud 420 is worn by the user.
- the sensors 428, 430, and 432 each have a linear shape, extend longitudinally on the wall 426 between the top and bottom ends 422 and 424 and have a respective radial position about the wall 426.
- the sensors 428, 430, and 432 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal.
- the sensors 428, 430, and 432 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor.
- the wearable sensing device 420 may collect signals indicative of a user’s facial movements using the sensors 428, 430, 432, while being comfortably worn by the user for example.
- the sensors 428, 430, and 432 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal.
- the sensors 428 may be active sensors and the sensors 430 may be ground sensors while the sensors 432 are reference sensors.
- the sensors 428, 430, and 432 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- Referring to FIG. 6, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 440 is illustrated.
- the earbud 440 extends longitudinally between a top portion or end 442 and a bottom portion or end 444 and an outer wall 446 extends between the top and bottom portions 442 and 444.
- the cross-section of the wall 446 taken at any position along the longitudinal axis is provided with a substantially circular shape.
- the outer wall 446 faces and at least partially engages the ear canal of the user when the wearable sensing device 440 is worn by the user.
- a plurality of sensors/electrodes 448, 450, and 452 are mounted on the external face of the wall 446 so as to engage with the ear canal when the earbud 440 is worn by the user.
- the sensors 448, 450, and 452 each have a linear shape, extend longitudinally on the wall 446 between the top and bottom ends 442 and 444 and have a respective radial position about the wall 446.
- the sensors 448, 450, and 452 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal.
- the sensors 448, 450, and 452 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor.
- the wearable sensing device 440 may collect signals indicative of a user’s facial movements using the sensors 448, 450, and 452, while being comfortably worn by the user for example.
- the sensors 448, 450, and 452 each have a width that is larger than the width of the sensors 428, 430, and 432.
- the sensors 448, 450, and 452 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal.
- the sensors 448 may be active sensors and the sensors 450 may be ground sensors while the sensors 452 are reference sensors.
- the sensors 448, 450, and 452 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- Referring to FIG. 7, there is illustrated a further exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 460 is illustrated.
- the earbud 460 extends longitudinally between a top portion or end 462 and a bottom portion or end 464 and an outer wall 466 extends between the top and bottom portions 462 and 464.
- the cross-section of the wall 466 taken at any position along the longitudinal axis is provided with a substantially circular shape.
- the outer wall 466 faces and at least partially engages the ear canal of the user when the wearable sensing device 460 is worn by the user.
- a plurality of sensors/electrodes 468, 470, and 472 are mounted on the external face of the wall 466 so as to engage with the ear canal when the earbud 460 is worn by the user.
- the sensors 468, 470, and 472 each have a dot shape, and are located at a respective radial position about the earbud 460 and a respective longitudinal position between the top and bottom ends 462 and 464.
- the sensors 468, 470, and 472 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal.
- the sensors 468, 470, and 472 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor.
- the wearable sensing device 460 may collect signals indicative of a user’s facial movements using the sensors 468, 470, and 472, while being comfortably worn by the user for example.
- the sensors 468, 470, and 472 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal.
- the sensors 468 may be active sensors and the sensors 470 may be ground sensors while the sensors 472 are reference sensors.
- the sensors 468, 470, and 472 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- Referring to FIG. 8, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 480 is illustrated.
- the earbud 480 extends longitudinally between a top portion or end 482 and a bottom portion or end 484 and an outer wall 486 extends between the top and bottom portions 482 and 484.
- the cross-section of the wall 486 taken at any position along the longitudinal axis is provided with a substantially circular shape.
- the outer wall 486 faces and at least partially engages the ear canal of the user when the wearable sensing device 480 is worn by the user.
- a plurality of sensors/electrodes 488, 490, and 492 are mounted on the external face of the wall 486 so as to engage with the ear canal when the earbud 480 is worn by the user.
- the sensors 488, 490, and 492 each have a circular shape and extend along a circumference of the earbud 480 at a respective longitudinal position between the top and bottom ends 482 and 484.
- the sensors 488, 490, and 492 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal.
- the sensors 488, 490, and 492 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor.
- the wearable sensing device 480 may collect signals indicative of a user’s facial movements using the sensors 488, 490, and 492, while being comfortably worn by the user, for example.
- the sensors 488, 490, and 492 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal.
- the sensors 488 may be active sensors and the sensors 490 may be ground sensors while the sensors 492 are reference sensors.
- the sensors 488, 490, and 492 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- In FIG. 9, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces, of which only the earbud 493 is illustrated.
- the earbud 493 extends longitudinally between a top portion or end 494 and a bottom portion or end 495 and an outer wall 496 extends between the top and bottom portions 494 and 495.
- the cross-section of the wall 496 taken at any position along the longitudinal axis is provided with a substantially circular shape.
- the outer wall 496 faces and at least partially engages the ear canal of the user when the wearable sensing device 493 is worn by the user.
- a plurality of sensors/electrodes 497, 498, and 499 are mounted on the external face of the wall 496 so as to engage with the ear canal when the earbud 493 is worn by the user.
- the sensors 497, 498, and 499 each have a circular shape and extend along a circumference of the earbud 493 at a respective longitudinal position between the top and bottom ends 494 and 495.
- the sensors 497, 498, and 499 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal.
- the sensors 497, 498, and 499 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor.
- the wearable sensing device 493 may collect signals indicative of a user’s facial movements using the sensors 497, 498, and 499, while being comfortably worn by the user, for example.
- the sensors 497, 498, and 499 each have a width that is larger than the width of the sensors 488, 490, and 492.
- the sensors 497, 498, and 499 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal.
- the sensors 497 may be active sensors and the sensors 498 may be ground sensors while the sensors 499 are reference sensors.
- the sensors 497, 498, and 499 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- In FIGs. 10A and 10B, there is illustrated an exemplary wearable sensing device 500 having a shape comparable to that of a pair of headphones that can be worn over the ears.
- the wearable sensing device 500 includes earcups 504 and 508 for engaging ears of a user.
- the earcups 504 and 508 include inner edges 501.
- the inner edges 501 face the ears of the user when the wearable sensing device 500 is worn by the user.
- the wearable sensing device 500 may include a single earcup instead of two.
- the wearable sensing device 500 includes a plurality of sensors/electrodes, such as one or more active sensors 502 on the inner edges 501 of the earcups 504 and 508, and one or more reference sensors and/or ground sensors 506 on one or both of the earcups 504 and 508.
- one or more reference sensors and/or ground sensors 506 are located on a mastoid bone of the user and in front of the ear, aligned with the eyes of the user.
- Such an arrangement provides direct physical contact of the ground sensors and/or reference sensors 506 with the skin of the user.
- Direct physical contact of the ground sensors and/or reference sensors with the skin of a user has several benefits, for example, improved signal-to-noise ratio (SNR).
- the plurality of sensors 502 and 506 are configured to measure active biosignals, ground biosignals and reference biosignals.
- the wearable sensing device 500 may collect signals indicative of a user’s facial movements using the sensors 502 and 506, while being comfortably worn by the user.
- the wearable sensing device 500 may include a support body 510 (for example, headband) coupled to the two earcups 504 and 508 for holding the two earcups 504 and 508 over the ears of the user when the wearable sensing device 500 is worn by the user.
- the support body 510 engages a head portion of the user when the wearable sensing device 500 is worn by the user.
- the support body 510 may comprise an inner surface 512.
- the wearable sensing device 500 may include one or more sensors 514 on the inner surface 512 for engaging a skull of the user when the wearable sensing device 500 is worn by the user.
- at least some of the sensors 514 may include ground sensors and/or reference sensors to measure additional ground biosignals and/or reference biosignals.
- at least some of the sensors 514 may include active sensors to measure additional active biosignals.
- the sensors 502, 506 and 514 do not have a predefined function and their function is dynamically assigned using the method described above for example.
- In FIGs. 11A-11C, there is illustrated an exemplary wearable sensing device 600 having a shape comparable to that of a virtual/augmented reality gear.
- the wearable sensing device 600 comprises a head mounted display 602 and support bodies 608 and 612.
- the head mounted display 602 engages with a portion of the face of the user.
- the head mounted display 602 comprises an inner edge 604 facing the portion of the face of the user when the wearable sensing device 600 is worn by the user.
- the wearable sensing device 600 includes a plurality of sensors/electrodes 606 on the inner edge 604 of the head mounted display 602.
- the plurality of sensors 606 may include active sensors and one or more reference sensors and ground sensors. When the wearable sensing device 600 is worn by the user, the plurality of sensors 606 are located on the portion of the face of the user.
- Such an arrangement provides direct physical contact of the plurality of sensors 606 with the skin of the user.
- A direct physical contact of the ground sensors and/or reference sensors with the skin of the user has several benefits, for example, improved SNR.
- Locating the plurality of sensors 606 on the portion of the face of the user provides similar benefits to the wearable sensing device 600.
- the wearable sensing device 600 may include support bodies 608 and 612 (for example, headbands) coupled to the head mounted display 602 for engaging a head portion of the subject and for holding head mounted display 602 over a portion of the face of the user when the wearable sensing device 600 is worn by the user.
- the support bodies 608 and 612 may comprise inner surfaces 610 and 614.
- the wearable sensing device 600 may include one or more sensors 616 on the inner surface 610 and one or more sensors 618 on the inner surface 614 for engaging a skull of the user when the wearable sensing device 600 is worn by the user.
- at least some of the sensors 616 and 618 may include ground sensors and/or reference sensors to measure additional ground biosignals and/or reference biosignals.
- at least some of the sensors 616 and 618 may include active sensors to measure additional active biosignals.
- the sensors may be of any suitable type.
- the sensors may be able to detect signals even if they are at some distance from the source of the signal, movement, or muscle group being detected.
- signals relating to the movement of facial muscles may be detected by sensors located near the ear of the wearer.
- the wearable sensing devices 300, 400, 500, and 600 may have integrated housings to house electronic elements.
- the housing within the wearable sensing devices 300, 400, 500, and 600 may be configured to accommodate components such as a processor, memory, or other electronic components. Additionally, the housing may include openings or ports for connectivity or other functional requirements. In some embodiments, the housing may further include batteries, communication modules, or any other electronic devices associated with the functionality of the wearable sensing devices 300, 400, 500, and 600.
- Figure 12 illustrates one embodiment of a method 700 for operating sensors to obtain biosignals of a subject.
- the method 700 allows for determining, amongst a plurality of sensors configured to measure biosignals, which sensor(s) can be used as a reference and/or ground sensor and therefore which other sensor(s) can be used as an active sensor.
- impedance values of a plurality of sensors each located at a respective position on a subject are received.
- the impedance values are received from a plurality of sensors configured to measure biosignals such as a plurality of biopotential sensors.
- the impedance values are received from impedance sensors each located adjacent to a respective sensor configured to measure a biosignal. In this case, it will be understood that each impedance sensor and its respective biosignal sensor are considered as being located at the same location on the subject.
- the plurality of sensors may be included in the wearable sensing devices, such as 300, 400, 500, and 600. The impedance values are received when a wearable sensing device, such as 300, 400, 500, or 600, is worn by a user.
- one or more current generators may be controlled for generating alternating currents at various positions on the body of the subject where a plurality of sensors are located.
- the one or more current generators may be included in the wearable sensing devices, such as 300, 400, 500, and 600.
- the generated alternating current may have a known value.
- the known alternating current may be applied to the plurality of sensors via an electrode-skin interface.
- the application of the known alternating current may produce voltages across the plurality of sensors.
- one or more voltage sensors may be controlled for measuring the voltages across each one of the plurality of sensors.
- the impedance value across the given sensor is measured based on the voltage across the given sensor and the known alternating current.
- the impedance across the given sensor may be measured as Z = V / I, where V is the voltage measured across the given sensor and I is the known alternating current.
- the lowest impedance value amongst the impedance values received at step 702 is determined.
- a given one of the plurality of sensors associated with the lowest impedance value is identified based on the position at which the lowest impedance has been measured, i.e., the given identified sensor corresponds to the sensor that is located at the same position at which the lowest impedance has been measured. It is to be noted that a lower impedance value represents a better direct physical contact of the given sensor with the skin of the user as compared to other sensors. Therefore, the identification of the given sensor associated with the lowest impedance value assists in improving the quality of active biosignals at least in terms of SNR.
- the two lowest impedance values amongst the impedance values received at step 702 are determined. Based on the two lowest impedance values, two given sensors of the plurality of sensors are then identified based on the locations at which the two lowest impedance values have been measured.
- the identified given sensor is operated as a reference sensor and/or a ground sensor to obtain at least one of a ground and/or reference biosignal.
- at least another sensor other than the given sensor is operated as an active sensor to obtain an active biosignal.
- one of the two given sensors is operated as a ground sensor to obtain the ground biosignal and the other one of the two given sensors is operated as a reference sensor to obtain the reference biosignal.
- another sensor other than the two given sensors is operated as an active sensor to obtain the active biosignal.
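- By way of illustration only, the selection logic of the method 700 can be sketched in a few lines of Python. This sketch is an assumption-laden illustration, not the disclosed implementation: the function name `assign_roles`, the array values, and the 10 µA test current are hypothetical. It estimates each electrode-skin impedance as Z = V / I from the known alternating current and the measured voltage, assigns the two lowest-impedance sensors as ground and reference, and operates the remaining sensors as active sensors.

```python
import numpy as np

def assign_roles(voltages_rms, current_rms):
    """Dynamically assign ground/reference/active roles from impedances.

    voltages_rms : RMS voltage measured across each sensor (volts)
    current_rms  : known RMS value of the injected alternating current (amps)
    """
    impedances = np.asarray(voltages_rms) / current_rms  # Z = V / I per sensor
    order = np.argsort(impedances)        # lowest impedance = best skin contact
    ground_idx, reference_idx = order[0], order[1]
    active_idx = order[2:]                # remaining sensors obtain active biosignals
    return ground_idx, reference_idx, active_idx

# Example: five sensors driven with a hypothetical 10 uA test current
ground, ref, active = assign_roles([0.12, 0.05, 0.30, 0.08, 0.22], 10e-6)
print(ground, ref, list(active))          # sensor 1 -> ground, sensor 3 -> reference
```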
- In Figure 13, there is illustrated a method 800 of operating a wearable sensing device. The method 800 may perform one or more functions based on the input signal, such as determining facial movements, determining an emotional or mental state of the wearer, or any other suitable determination.
- an input signal is received from one or more sensors. At least some of the one or more sensors may be disposed on a device such as the devices 300, 400, 500 described above.
- the input signal from the sensors may include one or more brain signals.
- the input signal from the sensors may include one or more biosignals that may be indicative of the movements of particular muscles or muscle groups of the wearer. In some embodiments, the input signal received does not include or require any intentional input from the wearer, and only requires detection of one or more movements or physical states of the wearer.
- the received input signal is processed to identify one or more gestures or movements of the user. This processing will be discussed below in further detail.
- the determined gestures or movements of the user are mapped to one or more actions to be taken.
- the action to be taken may depend on an electronic device currently being used by the user. For example, when the user is operating a computer mouse, the determined gestures or movements could be used to direct the movement of the cursor or operate the mouse buttons. In another example, if a user is in an electric wheelchair, the determined gestures or movements could be used to direct the movement of the wheelchair. In other situations, the determined gestures or movements could be used to operate one or more devices or appliances, such as lights, televisions, or speakers, or other internet of things (IoT) devices that are connectable to a communications network.
- the mapping between gestures and actions may be preprogrammed, or may be configurable by the user.
- a command is transmitted to the appropriate device to perform the one or more actions determined at 806.
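- As a minimal sketch of steps 806 and 808 (mapping identified gestures to actions and transmitting the corresponding commands), the mapping can be held in a user-configurable lookup table. The gesture labels, device names, and the `send_command` stub below are hypothetical placeholders for whatever transport (e.g., Bluetooth) a given device actually uses.

```python
# Hypothetical, user-configurable mapping of identified gestures to device actions.
GESTURE_ACTIONS = {
    "dual_blink":   ("wheelchair", "toggle_motion"),  # start or stop movement
    "head_left":    ("wheelchair", "steer_left"),
    "head_right":   ("wheelchair", "steer_right"),
    "teeth_clench": ("lights",     "toggle"),
}

def send_command(device: str, action: str) -> None:
    print(f"-> {device}: {action}")  # stand-in for the real transport layer

def dispatch(gesture: str) -> None:
    """Map an identified gesture to an action (806) and transmit the command (808)."""
    if gesture in GESTURE_ACTIONS:
        device, action = GESTURE_ACTIONS[gesture]
        send_command(device, action)

dispatch("dual_blink")  # -> wheelchair: toggle_motion
```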
- a method 900 of processing a received signal to identify one or more gestures or movements of the user will be described.
- the method 900 may correspond to step 804 of Figure 13.
- Training data from human participants is collected at 902. This step may optionally be omitted, for example if training data has previously been collected or is otherwise provided, or if the AI has already been trained, in which case the method 900 begins at step 908.
- the training data is collected using a standardized protocol.
- Each participant wears a training device with a specified number and configuration of sensors, which may correspond to the arrangement of sensors on the user device.
- the participant is asked to execute specific gestures, for example by displaying instructions on a screen facing the participant.
- the biopotential signals and/or sensor data are recorded by the device, and associated with the known gestures that were performed. This process may be repeated with multiple participants, for example to ensure a large amount of data and account for interpersonal variability in the observed signals.
- the intended user of a particular device may optionally be used as the training participant to obtain training data for a particular device, so that the device is adapted to the signals generated by the user while performing the target gestures.
- the training data is preprocessed using standard techniques known to persons skilled in the art.
- the generated dataset is then divided into time windows, at predetermined intervals.
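- The windowing step might look like the following sketch; the 250 Hz sampling rate, 8 channels, 1 s window, and 50% overlap are assumed values chosen for illustration, not parameters taken from the disclosure.

```python
import numpy as np

def make_windows(signals, window_len, hop):
    """Split a (channels, samples) recording into fixed-length time windows."""
    n_samples = signals.shape[1]
    starts = range(0, n_samples - window_len + 1, hop)
    return np.stack([signals[:, s:s + window_len] for s in starts])

recording = np.random.randn(8, 2500)         # 8 channels, 10 s at an assumed 250 Hz
windows = make_windows(recording, 250, 125)  # 1 s windows with 50% overlap
print(windows.shape)                         # (19, 8, 250)
```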
- the AI model is trained, using the preprocessed data, to identify the gestures.
- the AI model may include a transformer. In preparation for the transformer, each time window of data may be divided into a predetermined number of tokens corresponding to the number of time points to be encoded by the transformer block. The windows are transposed and passed to another transformer block, where the tokens are mapped to a higher dimension and positional encoding can be added to each input.
- Hyperparameters such as the number of layers of the transformer, the dimension of the tokens, and others, may be tuned by applying grid search or Bayesian optimization.
- Each transformer block is assigned a randomly initialized token, and its parameters are updated using gradient descent.
- a loss function is applied, such as cross entropy loss for classification, and mean squared error or mean absolute error for regression.
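- A minimal sketch of such a transformer-based classifier is given below, assuming PyTorch as the framework; all sizes (token length, number of tokens, model dimension, number of classes) are illustrative assumptions rather than values fixed by the disclosure. Each window is split into tokens, the tokens are linearly mapped to a higher dimension, a learned positional encoding is added, a randomly initialized class token is prepended, and a cross-entropy loss drives the gradient-descent updates.

```python
import torch
import torch.nn as nn

class GestureTransformer(nn.Module):
    """Illustrative transformer classifier for windowed biosignal data."""

    def __init__(self, token_len=25, n_tokens=10, d_model=64, n_classes=6):
        super().__init__()
        self.n_tokens = n_tokens
        self.embed = nn.Linear(token_len, d_model)           # map tokens to a higher dimension
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, d_model))  # positional encoding
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))  # randomly initialized token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                    # x: (batch, samples), one channel
        b = x.shape[0]
        tokens = self.embed(x.view(b, self.n_tokens, -1))    # (batch, n_tokens, d_model)
        tokens = torch.cat([self.cls.expand(b, -1, -1), tokens], dim=1) + self.pos
        return self.head(self.encoder(tokens)[:, 0])         # classify from the class token

model = GestureTransformer()
logits = model(torch.randn(32, 250))                         # a batch of 1 s windows
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 6, (32,)))
loss.backward()                                              # parameters updated by gradient descent
```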
- a cross-user train-test split can be used to train and test the AI model.
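- A cross-user split simply requires that every window from a given participant fall entirely within either the training set or the test set, so that the model is evaluated on unseen users. A short sketch, assuming scikit-learn is available (the shapes and group labels are illustrative):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

windows = np.random.randn(100, 8, 250)       # (window, channel, sample), illustrative
labels = np.random.randint(0, 6, 100)        # gesture label per window
participants = np.repeat(np.arange(10), 10)  # windows 0-9 from subject 0, etc.

# Hold out whole participants so test data comes only from unseen users.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(windows, labels, groups=participants))
assert not set(participants[train_idx]) & set(participants[test_idx])
```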
- an AI model such as a deep neural network (NN), a convolutional NN (CNN), a long short-term memory (LSTM) NN, or other temporal-based architecture, or a combination of these models, may be used.
- hyperparameters such as depth, kernel size, dilation rate, filters, and others may be tuned to optimize the system for speed and accuracy.
- the time windows are used as the inputs.
- a temporal convolution is applied to each channel of inputs, followed by a residual connection after each layer.
- a sliding window can be applied to the data, with each window frame containing either a target signal or an absence of a target signal.
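- The convolutional alternative can be sketched as a single dilated temporal convolution layer with a residual connection, again assuming PyTorch; the channel count, kernel size, and dilation rate below are illustrative values of the very hyperparameters the text suggests tuning.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One dilated temporal convolution layer followed by a residual connection."""

    def __init__(self, channels=8, kernel_size=3, dilation=2):
        super().__init__()
        left_pad = (kernel_size - 1) * dilation        # left-pad so the convolution is causal
        self.pad = nn.ConstantPad1d((left_pad, 0), 0.0)
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                              # x: (batch, channels, samples)
        return self.act(self.conv(self.pad(x)) + x)    # residual connection after the layer

block = TemporalBlock()
out = block(torch.randn(32, 8, 250))                   # output shape preserved: (32, 8, 250)
```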
- Models for target signals may be obtained by using training data based on test subjects who are asked to perform different movements or gestures while wearing devices equipped with similar sensors. Thus, a degree of variation in the target signal can also be determined based on the training data and used to identify target signals that are not identical to the training data, for example due to interpersonal variations or noise.
- the AI may use machine learning based on the detected signals to continue to adapt to a particular user over time, which may provide increasingly accurate determinations.
- the trained AI may be used to identify gestures made by a wearer of the device, based on signals received from the sensors.
- the AI may be able to identify different types of gestures or movements, such as one or more of: eye movements, such as saccades, single winks, or intentional and unintentional dual blinks; facial movements, such as teeth clenching, tongue movements, frowning, smiling, nose twitching, or other motions and expressions.
- multiple signals can be detected in parallel, which may permit an understanding of the intent and state of the user.
- If a user is mobility impaired, for example if the user does not have control of all of the muscles that would be detected by the sensors, the muscles and muscle groups that they have control over can be identified, and then solutions can be adapted specifically to their needs. This may include modifying sensor placement, sensor position, sensor material, and sensor geometry. Additionally or alternatively, the sensor inputs may be used to detect other states, for example emotional states, such as anxiety, stress, relaxation, and others; mental states, such as distraction, focus, or seizure states; or body states, such as sitting, walking, or running.
- the device may be configured to map the actions to be performed at 808 to movements or states that the user is capable of generating and the device is capable of detecting and identifying.
- Additional training of the AI model may be required or desired to ensure that the device can accurately identify the movements and gestures of the user.
- Additional body-mounted or body-pointed sensors may also be provided, to produce additional signal inputs that may assist in making these or additional determinations when combined with the sensors described above.
- the additional sensors may provide additional information about the activity, stance, posture, or other movement of the user to assist the AI model in making determinations.
- movements of specific muscle groups within the user’s control can be mapped to different commands in order to control devices.
- If a paralyzed user is only able to rotate his head and blink, he could steer a wheelchair by indicating a direction using his head, and use blinks to start or stop.
- Other mappings of gestures to actions may alternatively be used.
- a user can control a cursor and operate a virtual mouse in order to make selections on a screen, enabling the user to communicate or type without the need of assistance or other specialized devices.
- This platform can be connected through Bluetooth™ or Wi-Fi™ or other connection protocols to assistive devices, such as control switches, feeding robots, electric wheelchairs, and others, as well as to conventional electronic devices, including computers, phones, television sets, radio or speaker sets, and others.
- the determination of emotion and physical states can also be used to create a virtual avatar of the user, mapping their real emotions, expressions, and actions to a digital representation.
- a direct physical contact of the ground sensors and/or reference sensors with the skin presents various benefits.
- a direct contact with the skin can be more effective in capturing accurate and reliable biosignals compared to non-contact or through-clothing sensors.
- Direct skin contact allows for a more efficient and stable interface between the ground sensors and/or reference sensors and the body of the user, resulting in improved signal quality for various biopotential measurements.
- Advantages of having the direct skin contact include reduced signal interference, enhanced signal amplitude, improved signal stability, better electrode-skin coupling, minimized environmental interference, and improved signal-to-noise ratio (SNR).
- the computing device 100 comprises various hardware components including one or more single or multi-core processors collectively represented by a processor 110, a graphics processing unit (GPU) 111, a solid-state drive 120, a random-access memory 130, a display interface 140, and an input/output interface 150.
- Communication between the various components of the computing device 100 may be enabled by one or more internal and/or external buses 160 (e.g., a PCI bus, universal serial bus, IEEE 1394 “Firewire” bus, SCSI bus, Serial-ATA bus, etc.), to which the various hardware components are electronically coupled.
- the input/output interface 150 may be coupled to a touchscreen 190 and/or to the one or more internal and/or external buses 160.
- the touchscreen 190 may be part of the display. In one or more implementations, the touchscreen 190 is the display. The touchscreen 190 may equally be referred to as a screen 190.
- the touchscreen 190 comprises touch hardware 194 (e.g., pressure-sensitive cells embedded in a layer of a display allowing detection of a physical interaction between a user and the display) and a touch input/output controller 192 allowing communication with the display interface 140 and/or the one or more internal and/or external buses 160.
- the input/output interface 150 may be connected to a keyboard (not shown), a mouse (not shown) or a trackpad (not shown) allowing the user to interact with the computing device 100 in addition to or in replacement of the touchscreen 190.
- the solid-state drive 120 stores program instructions suitable for being loaded into the random-access memory 130 and executed by the processor 110 and/or the GPU 111.
- the program instructions may be part of a library or an application.
- the computing device 100 may be implemented as a server, a desktop computer, a laptop computer, a tablet, a smartphone, a personal digital assistant or any device that may be configured to implement the present technology, as it may be understood by a person skilled in the art.
Abstract
A method of identifying a facial gesture performed by a user includes: receiving at least one signal input from at least one sensor in contact with a head of the user; identifying at least one gesture of the user corresponding with the at least one received signal; mapping the at least one identified gesture to at least one action to be taken; and transmitting a command to an electronic device to perform the action. A method of training a neural network (NN) for identifying a facial gesture includes: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
Description
METHOD AND SYSTEM FOR DETERMINING FACIAL MOVEMENTS
TECHNICAL FIELD
[0001] The invention relates to determining facial movements, and more specifically to wearable systems and methods for determining facial movements.
BACKGROUND OF THE ART
[0002] Facial movements, including eye movements, are generated as a result of messages sent from the brain to the muscles of the face. The messages are sent in the form of electrochemical impulses from the motor cortex of the brain, and communicated by nerves to activate the appropriate muscle or muscle group. The electrochemical signals may have a detectable signature, and the activation of the muscle may itself have a detectable signature. These signatures can be detected, for example by biopotential sensors placed on the skin. Signals corresponding to different facial movements can be differentiated by the nature and location of the signals, which may permit a determination of which facial movement has occurred.
[0003] There have been attempts to use head-mounted sensors and other devices to detect and track facial movements, using different combinations of head-mounted sensors, cameras, and biopotential sensors. Head-mounted “wearable” devices with built-in sensors have been developed. Some of these systems have drawbacks, such as being bulky or inconvenient to wear, or having one or more cameras in the wearer’s line of sight that obstruct the wearer’s vision. In addition, some types of sensors, particularly those used to gather data for input to an artificial intelligence (AI), use gel or wet electrodes to improve the signal quality, which may be unpleasant or otherwise undesirable for the user.
[0004] There is a desire for an improved wearable sensing device for facial movement recognition.
SUMMARY
[0005] It is an object of the present disclosure to ameliorate at least one drawback of the prior art.
[0006] It is an object of the present disclosure to provide a wearable sensing device having improved capability of detecting and distinguishing facial movements.
[0007] It is an object of the present disclosure to provide a wearable sensing device for detecting facial movements that does not obstruct a view of the wearer.
[0008] It is an object of the present disclosure to provide a wearable sensing device for detecting facial movements that is convenient and unobtrusive to wear.
[0009] It is an object of the present disclosure to provide a system and method for detecting and distinguishing facial movements based on received sensor data.
[0010] According to a first broad aspect, a method of identifying a facial gesture performed by a user includes: receiving at least one signal input from at least one sensor in contact with a head of the user; identifying at least one gesture of the user corresponding with the at least one received signal; mapping the at least one identified gesture to at least one action to be taken; and transmitting a command to an electronic device to perform the action.
[0011] In some embodiments, identifying the at least one gesture is performed by a neural network (NN).
[0012] In some embodiments, the NN comprises a transformer.
[0013] In some embodiments, the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
[0014] In some embodiments, the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
[0015] In some embodiments, the at least one sensor is included in a wearable sensing device.
[0016] In some embodiments, the at least one sensor is disposed near or in an ear of the user.
[0017] In some embodiments, the at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
[0018] In some embodiments, the sensor is capable of detecting an electrochemical or biopotential signal.
[0019] In some embodiments, the sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
[0020] In some embodiments, the at least one sensor is configured to detect a specific facial gesture of the user.
[0021] According to a second broad aspect, a system of identifying a facial gesture performed by a user, the system comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving at least one signal input from at least one sensor in contact with a head of the user; identifying at least one gesture of the user corresponding with the at least one received signal; mapping the at least one identified gesture to at least one action to be taken; and transmitting a command to an electronic device to perform the action.
[0022] In some embodiments, identifying the at least one gesture is performed by a neural network (NN).
[0023] In some embodiments, the NN comprises a transformer.
[0024] In some embodiments, the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
[0025] In some embodiments, the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
[0026] In some embodiments, the at least one sensor is included in a wearable sensing device.
[0027] In some embodiments, the at least one sensor is disposed near or in an ear of the user.
[0028] In some embodiments, the at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
[0029] In some embodiments, the sensor is capable of detecting an electrochemical or biopotential signal.
[0030] In some embodiments, the sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
[0031] In some embodiments, the at least one sensor is configured to detect a specific facial gesture of the user.
[0032] According to a third broad aspect, a method of training a neural network (NN) for identifying a facial gesture includes: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
[0033] In some embodiments, the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by the at least one user.
[0034] In some embodiments, the at least one user is an intended user of the wearable sensing device.
[0035] In some embodiments, preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
[0036] In some embodiments, preprocessing the training data comprises dividing the training data into a plurality of time windows.
[0037] According to a fourth broad aspect, system of training a neural network (NN) for identifying a facial gesture, the system comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions,
being configured for: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
[0038] In some embodiments, the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by at least one user.
[0039] In some embodiments, the at least one user is an intended user of the wearable sensing device.
[0040] In some embodiments, preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
[0041] In some embodiments, preprocessing the training data comprises dividing the training data into a plurality of time windows.
[0042] According to a fifth broad aspect, a wearable sensing device comprising: an elongated body extending between a first end and a second end and comprising a body inner face, the body inner face facing a subject when the wearable sensing device is worn by the subject; a first temple arm extending from the first end of the elongated body and a second temple arm extending from the second end of the elongated body, the first temple arm and the second temple arm each for engaging a respective ear of the subject; a bridge body mounted to the elongated body, the bridge body being configured for engaging a nose of the subject and comprising a first arm and a second arm each for abutting a respective side of the nose, each one of the first arm and the second arm comprising a respective arm inner face for engagement with the nose of the subject; at least one first sensor mounted to the inner face of the elongated body so as to engage a forehead of the subject when the wearable sensing device is worn by the subject, the at least one first sensor for measuring at least one of a ground biosignal and a reference biosignal; and at least one second sensor positioned on the respective arm inner face of at least one of the first arm and the second arm so as to engage a nasal bone of the nose when the wearable sensing device is worn by the subject, the at least one second sensor for measuring at least one active biosignal.
[0043] In some embodiments, the at least one first sensor comprises a reference sensor and a ground sensor each mounted to the inner face of the elongated body so as to engage a forehead of the subject when the wearable sensing device is worn by the subject.
[0044] In some embodiments, the elongated body comprises a protrusion projecting from the inner face thereof, the at least one first sensor being mounted to the protrusion.
[0045] In some embodiments, the protrusion is provided with a curved shape.
[0046] In some embodiments, the protrusion is located substantially in the middle of the elongated body.
[0047] In some embodiments, the wearable sensing device further comprises at least one third sensor mounted to at least one of the first temple arm and the second temple arm for engaging a skull of the subject when the wearable sensing device is worn by the subject, the at least one third sensor for measuring at least one of an additional ground biosignal and an additional reference biosignal.
[0048] In some embodiments, the at least one third sensor comprises two biosignal sensors each mounted to a respective one of the first temple arm and the second temple arm.
[0049] In some embodiments, the at least one first sensor, the at least one second sensor and the at least one third sensor each comprise at least one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and an electromyography (EMG) sensor.
[0050] In some embodiments, the first temple arm and the second temple arm are each rotatably mounted to the elongated body.
[0051] According to a fifth broad aspect, a method for operating sensors to obtain biosignals, the method comprising: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality
of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
[0052] In some embodiments, said determining a lowest impedance value comprises determining two lowest impedance values and said identifying a given one of the plurality of sensors comprises identifying two given ones of the plurality of sensors.
[0053] In some embodiments, said operating the given one of the plurality of sensors comprises operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and a second one of the two given ones as a reference sensor to obtain the reference biosignal.
[0054] In some embodiments, the method further comprises: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
[0055] According to a sixth broad aspect, a system for operating sensors to obtain biosignals, the system comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
[0056] In some embodiments, the processor is configured for determining two lowest impedance values and said identifying a given one of the plurality of sensors comprises identifying two given ones of the plurality of sensors.
[0057] In some embodiments, the processor is configured for operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and operating a second one of the two given ones as a reference sensor to obtain the reference biosignal.
[0058] In some embodiments, the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
[0059] According to a seventh broad aspect, a system for obtaining biosignals, the system comprising: a wearable sensing device mountable to a head of a user and comprising a plurality of sensors each to be located at a respective position on the head when the wearable sensing device is worn by the user; a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of the plurality of sensors, each measured at a respective position on a body of a subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
[0060] In some embodiments, the wearable sensing device comprises: at least one earcup for engaging an ear of a subject and comprising an inner edge, the inner edge facing the ear of the subject when the wearable sensing device is worn by the subject; a support body coupled to the at least one earcup for holding the at least one earcup over the ear of the subject when the wearable sensing device is worn by the subject; and wherein the plurality of sensors are located on the inner edge of the earcup.
[0061] In some embodiments, the at least one earcup comprises two earcups.
[0062] In some embodiments, when the wearable sensing device is worn by the subject, at least one sensor of the plurality of sensors is located on a mastoid bone of the subject.
[0063] In some embodiments, when the wearable sensing device is worn by the subject, at least one of the plurality of sensors is located in front of the ear, aligned with an eye of the subject.
[0064] In some embodiments, the support body engages a head portion of the subject when the wearable sensing device is worn by the subject and comprises an inner surface, at least one of the plurality of sensors being located on the inner surface of the support body.
[0065] In some embodiments, the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising an outer portion configured for facing and at least partially engaging the ear canal of the subject when the wearable sensing device is worn by the subject, wherein the plurality of sensors are located on the outer portion of the earbud.
[0066] In some embodiments, the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising a top portion, a bottom portion opposite to the top portion and an outer wall extending between the top portion and the bottom portion, the outer wall configured for facing and at least partially engaging the ear canal of the user when the wearable sensing device is worn by the user, wherein the plurality of sensors are mounted on the outer wall of the earbud so as to engage with the ear canal when the earbud is worn by the user.
[0067] In some embodiments, the plurality of sensors extend longitudinally on the outer wall between the top portion and the bottom portion.
[0068] In some embodiments, the plurality of sensors have a respective radial position about the outer wall.
[0069] In some embodiments, the plurality of sensors have a dot shape.
[0070] In some embodiments, the plurality of sensors have a respective radial position about the outer wall and a longitudinal position between the top portion and the bottom portion.
[0071] In some embodiments, the plurality of sensors extend along a circumference of the outer wall between the top portion and the bottom portion.
[0072] In some embodiments, the plurality of sensors have a respective longitudinal position between the top portion and the bottom portion.
[0073] In some embodiments, the plurality of sensors have a circular shape.
[0074] In some embodiments, the wearable sensing device comprises: a head mounted display for engaging a portion of the face of a subject and comprising an inner edge, the inner edge facing the portion of the face of the subject when the wearable sensing device is worn by the subject; a support body coupled to the head mounted display for engaging a head portion of the subject and for holding the head mounted display over the portion of the face of the subject when the wearable sensing device is worn by the subject and comprising an inner surface; and wherein the plurality of sensors are located on at least one of the inner edge of the head mounted display and the inner surface of the support body.
[0075] In some embodiments, the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
[0076] In some embodiments, each one of the plurality of sensors comprises one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor, and an electromyography (EMG) sensor.
[0077] It is to be noted that the terms "sensors" and "electrodes" can be used interchangeably. While the term "sensor" is employed more frequently throughout the disclosure, it is contemplated that this choice of terminology is not meant to limit the scope of the present disclosure. Both sensors and electrodes encompass the devices or components, which are designed to detect and/or measure biosignals by interacting with at least a body part of a user.
BRIEF DESCRIPTION OF THE DRAWINGS
[0078] Having thus generally described the nature of the invention, reference will now be made to the accompanying drawings, showing by way of illustration example embodiments thereof and in which:
[0079] FIG. 1 is a block diagram of an electronic device, according to various embodiments of the present disclosure;
[0080] FIG. 2 is a schematic diagram of the face muscles of a human;
[0081] FIG. 3 illustrates a wearable sensing device having a shape comparable to that of a pair of glasses, according to various embodiments of the present disclosure;
[0082] FIG. 4 illustrates a wearable sensing device comprising a pair of earpieces, according to various embodiments of the present disclosure;
[0083] FIG. 5 illustrates a first exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure;
[0084] FIG. 6 illustrates a second exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure;
[0085] FIG. 7 illustrates a third exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure;
[0086] FIG. 8 illustrates a fourth exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure;
[0087] FIG. 9 illustrates a fifth exemplary earbud that is part of a wearable sensing device having the shape of an earpiece, according to various embodiments of the present disclosure;
[0088] FIG. 10A illustrates a wearable sensing device having a shape comparable to that of a pair of headphones comprising earcups, according to various embodiments of the present disclosure;
[0089] FIG. 10B illustrates the earcups of the wearable sensing device of FIG. 10A;
[0090] FIG. 11A illustrates a wearable sensing device having a shape comparable to that of a virtual/augmented reality gear, according to various embodiments of the present disclosure;
[0091] FIGs. 11B-11C illustrate a support body that is coupled to the wearable sensing device of FIG. 11A, according to various embodiments of the present disclosure;
[0092] FIG. 12 illustrates a flowchart of a method of operating sensors to obtain biosignals, according to various embodiments of the present disclosure;
[0093] FIG. 13 illustrates a flowchart of a method of operating a wearable sensing device, according to various embodiments of the present disclosure;
[0094] FIG. 14 illustrates a flowchart of a method of identifying a gesture or movement of a user, according to various embodiments of the present disclosure; and
[0095] FIG. 15 illustrates a block diagram of a computing device, according to various embodiments of the present disclosure.
DETAILED DESCRIPTION
[0096] According to some embodiments, described herein is a method, system, and apparatus for detecting and differentiating between the different signals that are sent from the brain to create different movements. The signals may be detected by sensors/electrodes positioned in one or more locations on the head of the wearer, for example in and around the ears, on the bridge of the nose, or in any other suitable locations. These sensors can detect one or more of eye movements, facial movements, or mouth and tongue movements. Furthermore, because of the effect of regional movements produced by other areas of the body, gross movements from other regions of the body can be sensed from the head by interpreting the shadow of their signals. As a result, in some embodiments the received signals may allow the determination and identification of noise artifacts produced by different parts of the body, including step tracking, head tracking, speech decoding, and others.
[0097] For example, noise artifacts from body movements can be observed in the electroencephalography (EEG) signals from the head. Though these signals can look similar, it is possible to differentiate the movements by applying novel approaches in artificial intelligence (AI) capable of decoding these neural signals in real-time, based on artifacts that can be identified in the action potential transmission of ionic currents. The sensors can optionally be tuned to optimize their mechanical, electrical, and chemical properties to target
specific muscles or muscle groups in order to more effectively identify them. Thus, by decoding the actions that procured these artifacts, or signals, the facial movements of a user can be determined by sensors placed in or around the ear, or elsewhere on the head. By modifying the approach of collecting and decoding these signals, the determinations of these sensors can be optimized to detect the movements more efficiently, from which a variety of useful commands can be created.
[0098] As can be appreciated, facial movement detection has applications for human-computer interaction. By understanding the movements of humans from a few discrete locations in and around the ear and across the head, a seamless method can be provided for a user to interact with devices hands-free and voice-free, by using subtle facial movements, such as eye movement or blinking, to generate commands for an electronic device.
[0099] The technology described herein may be applied in assistive technologies. People with movement disabilities, such as paralysis, paraplegia, or quadriplegia, may only have control of a few muscles in the fingers, eyes, or face, and thus are limited in their daily activities and independence. These people may rely on mouth sticks or switches to engage with devices; however, the embodiments disclosed herein provide a method for those with such disabilities to interact with their devices or connected environments, such as virtual keyboards, electric wheelchairs, feeding machines, home assistance devices, and others, using just the signals produced from their brain, as well as the limited muscles over which they still have control.
[00100] Some embodiments disclosed herein may be used for determining the emotional and physical state of a user for medical or other purposes.
[00101] Some embodiments disclosed herein may be used for the creation of digital avatars that reflect not only the physical activity of their real counterparts, but also their expression, attention, and emotion. Thus, in some embodiments a system detects and maps signals from the brain and body to actionable insights to be used by devices.
[00102] To be appealing for consumer applications, sensor locations should be considered that would be acceptable by consumers for daily use. These include locations generally accepted to be in contact with existing devices that are worn on the head. However, some existing headsets require attachment to the user's face, which may cause visual occlusion.
Some ear-based sensing techniques have been proposed to address this problem. For example, Gruebler, A. et al., (2014), Design of a Wearable Device for Reading Positive Expressions from Facial EMG Signals, IEEE Transactions on Affective Computing, 5(3), 227-237, (“Gruebler”) describes a facial motion detector positioned around the ear with built-in EMG electrodes. Gruebler discloses that EMG electrodes, even sitting on less mobile locations of the face (e.g. the side of the face), can detect signature patterns related to certain facial expressions. However, this approach may result in lower accuracy due to the increased distance between the sensors and the source of the signal on the face.
[00103] One alternative to tracking facial movements is known as “hearable” devices, which combine conventional sound-listening earphones with various biosensing systems. For example, Manabe, H. et al., (2013), Conductive rubber electrodes for earphone-based eye gesture input interface, Proceedings of the 17th Annual International Symposium on International Symposium on Wearable Computers - ISWC '13, 33, (“Manabe”) describes earphones containing electrooculography (EOG) electrodes made of conductive rubber to track eye movement from inside the ear. Ando, T. et al., (2017), CanalSense: Face-Related Movement Recognition System based on Sensing Air Pressure in Ear Canals, Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 679-689, (“Ando”) describes using barometers embedded in earphones to capture characteristic changes caused by face-related movements (e.g. opening or closing the mouth). Matthies, D. J. C., et al., (2017), EarFieldSensing: A Novel In-Ear Electric Field Sensing to Enrich Wearable Gesture Input through Facial Expressions, Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1911-1922, (“Matthies”) describes recognizing facial expressions based on electric field changes in the ear canal. Taniguchi, K. et al., (2018), Earable TEMPO: A Novel, Hands-Free Input Device that Uses the Movement of the Tongue Measured with a Wearable Ear Sensor, Sensors, 18(3), 733, (“Taniguchi”) describes using earphone-type photo sensors with infrared light emitting diodes (LEDs) and phototransistors to recognize tongue movements, allowing the users to control a music player by pushing the tongue against the roof of the mouth. Amesaka, T. et al., (2019), Facial expression recognition using ear canal transfer function, Proceedings of the 23rd International Symposium on Wearable Computers, 1-9, (“Amesaka”) describes using earphones equipped with microphones to estimate facial movements by ear canal
transfer of the internal sound generated from the movement. Choi, S. et al., (2022), PPGface: Like What You Are Watching? Earphones Can “Feel” Your Facial Expressions, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(2), 1-32, (“Choi”) describes using photoplethysmogram (PPG) sensors to distinguish between different facial expressions in real-world settings. However, there remains a desire for a device and method to distinguish, with high fidelity, between the subtle commands and facial movements produced by the face, such as blinking, winking, and others, by using sensors in and around the ears in combination with adaptable machine learning techniques for real-time, real-life applications.
[00104] Referring to Figure 1, a wearable sensing device 100 receives one or more sensor inputs 101. Examples of wearable sensing devices will be described below in further detail. The sensors/electrodes may be one or more of the following types: EEG sensors located in or around the ear; EOG sensors located in or around the ear; electromyography (EMG) sensors located in or around the ear; inertial sensors such as gyroscopes or accelerometers; directional or omnidirectional microphones mounted to the device; or external microphones. Additional sensor types will be described below or may be apparent to persons skilled in the art. The sensor inputs 101 may be indicative of eye movements or other facial movements of the wearer of the wearable sensing device, as will be discussed below in further detail.
[00105] The detected signals may be electrochemical in nature, and propagate outward to the skin, where they can be noninvasively detected using biopotential sensors. Some conventional sensors use a conductive metal in conjunction with a silver-silver chloride solution, which can be suitable for some medical applications. However, other types of sensors have been developed to be more suitable for commercial use, for example being easier to use and having lower maintenance. One example is a conductive polymer sensor, where one or more biocompatible polymer matrices, such as PDMS, TPU, or others, are combined with one or more conductive filler materials, such as carbon nanotubes, silver nanoparticles of various shapes, or others. Biopotential sensors and other biosensors capture the signals and send them to specialized hardware and software, where they are processed to determine the intended signal or movement using proprietary algorithmic and artificial intelligence methods.
[00106] Individual sensors, or sets of sensors, may be optimized to detect specific signals. When signals are sent through the nervous system to affect different areas or groups of the human body, specific sensors and receptors are used, and thus, a unique electrochemical signature is used to command different muscles or muscle groups. The ionic conduction of these signals through the body, which also propagate outward to the skin, can be detected by ionic sensors placed on the surface of the skin. These ionic sensors can be tuned to detect the unique electrochemistry of the action potentials, or biopotentials, generated by these specific movements. Thus, one or more ionic sensors can be capable of differentiating the unique signals detected at the same locations corresponding to different muscle movements. In general, the electrical properties of the sensor can be tuned to become more sensitive to the target signal by manipulating its composition. The sensors may be chosen to be dry, wet, or semi-wet sensors, and their composition may be chosen to include one or more of silver/silver-chloride, gels, hydrogels, polymers, conductive elements or fillers, conductive coatings, or other materials, depending on the particular signal or signals to which the sensors should be sensitive.
[00107] In one example, a sensor for detecting brain activity may be made with a conductive polymer material. A suitable sensor for detecting muscle activity may be made with a conductive polymer material, and may be designed or tuned (for example by selecting the conductive polymer material) to detect or be particularly sensitive to one or more particular muscles, muscle groups, or biosignals. The number, location, and geometry of the sensors may also be selected to have increased sensitivity to one or more particular muscles, muscle groups, or biosignals.
[00108] Other suitable types of sensors may include strain gauges, LED lights, IR lights, microphones, biopotential sensors, inertial measurement units, and temperature sensors. Additional suitable types of sensors may be known to persons skilled in the art.
[00109] It is contemplated that one or more brain sensors or one or more muscle sensors may operate in combination, or have their received signals combined, for example to increase the accuracy of the sensor signals or to detect a greater number of signals.
[00110] In addition, sensors can be optimized for sensing signals at certain locations on the skin by optimizing their geometry. For example, sensors for in-ear applications can be
designed with an ear tip geometry, similar to the shape of commercial audio devices. For applications where hair is present, sensors can be designed with prongs that are long enough to penetrate through the hair to reach the skin. For applications with a unique contour of the skin, such as behind the ears or in the concha of the ear, sensors can be designed with specific armatures to provide consistent contact with the skin along the path of the arm.
[00111] In addition, the placement of sensors on the body, head, or face can be optimized. Although a sensor would typically detect a better quality signal by being placed closer to the source of the signal, this is not always convenient for the wearer of the device. For example, to detect mouth movements, sensors placed on the cheek or jaw might not be desirable for consumer applications. Thus, sensors for mouth movements can be placed in more discreet or unobtrusive locations, such as around the ear or across the scalp. These locations can be optimized for detecting the movement of specific muscle groups, for example by placing the sensor along a different portion of the target muscle group or along a nerve path leading to the target muscle. Thus, a higher quality signal may be detected, even while placing the sensors in a position more convenient to the wearer of the device.
[00112] The sensor inputs are processed, preferably in real-time, by one or more processors 102 which may perform operations including digital signal processing algorithms and machine learning. The processing will be described below in further detail. The processing generates one or more outputs 103, which can be used for a variety of different applications. The outputs 103 could include, but are not limited to, one or more of: blink detection, attention, attention direction, intention, and preset electronic commands that may be used to control an electronic device.
[00113] In some embodiments, the wearable sensing device 100 implements a method of understanding one or more of the eye movements, facial movements and expressions, emotional state, and mental state of a user using a combination of brain and biosignals, which can be collected via a single system or set of sensors without requiring any intentional input from the user. For example, referring to Figure 2, the sensors may detect movements of one or more of the muscle groups 201-208 of the facial musculature 200 of the user. In some embodiments, the materials used for the sensors can be specifically optimized to sense from a specific muscle, muscle group, or biosignal of interest. The brain sensors can operate in isolation, or in combination with other brain sensors, and/or in combination with other
bio-sensors, in a sensor fusion approach, to increase accuracy and/or the ability to detect different signals. The sensors can be positioned in different configurations, such that the sensors are in locations that can be conveniently incorporated into headworn devices, and can be configured in locations that are distant from the sources of signal.
[00114] When the system is in use, signals or movements are produced by the body. One or more biopotential sensors or other biosensors capture a signal, and process the signal to determine one or more intended commands or movements using artificial intelligence methods that will be described below in further detail. These intended commands or movements are then mapped to different operations, based on the specific application engaged. Applications can range from assistive devices, to consumer electronics, to connected devices, to virtual avatars, and to other applications.
[00115] In some embodiments, the wearable sensing device 100 may provide a hands-free, voice-free method for users with movement disabilities or limited control over bodily movements to interact with and send commands to assistive, connected, or personal electronic devices. This may allow the user to operate a device to perform a desired function, by the deliberate use of a muscle or muscle group over which the user has control.
[00116] In some embodiments, the wearable sensing device 100 may permit a user to control an electronic device solely by way of the one or more sensors. This enables electronic devices to be designed with fewer or no buttons, switches, or other hardware dedicated solely to the user interface, potentially reducing costs and enabling innovative product designs or improved miniaturization.
[00117] Referring to Figure 3, there is illustrated an exemplary wearable sensing device 300 having a shape similar to that of a pair of glasses. The wearable sensing device 300 includes an elongated body 302, a bridge body 310, first and second temple arms 314 and 316 and a plurality of sensors/electrodes 324, 326, 328.
[00118] The elongated body 302 extends longitudinally between a first end 304 and a second end 306. In one embodiment, the elongated body 302 may extend substantially linearly between the first end 304 and the second end 306. In other embodiments, the elongated body 302 may have any suitable shape, such as the shape of a full-rim frame, a half-rim frame, a browline frame, a round frame, a rectangular frame, a square frame, or the
like. The manner in which the elongated body 302 is implemented should not limit the scope of the present disclosure.
[00119] The elongated body 302 comprises a top face 308-1, a bottom face 308-2 opposite to the top face 308-1, a front face 308-3 and an inner face 308-4 opposite to the front face 308-3. The inner face 308-4 faces the user’s face when the wearable sensing device 300 is worn by the user.
[00120] As described in greater detail below, the elongated body 302 is provided with at least one sensor 324 mounted on the inner face 308-4 thereof. The sensor 324 is mounted on the inner face 308-4 so as to engage or be in physical contact with the forehead of the user when the sensing device 300 is worn by the user. In some embodiments, the sensor 324 is located substantially at the middle of the length of the elongated body 302 so that the sensor 324 is substantially aligned with the nose of the user when the sensing device 300 is worn by the user. However, it should be understood that the sensor 324 may be located at any other adequate position on the inner face 308-4. In some embodiments, the elongated body 302 is provided with a plurality of sensors 324 each mounted at a respective position along the length of the inner face 308-4.
[00121] In some embodiments such as the illustrated embodiment, the elongated body 302 comprises a protrusion 322 projecting from the inner face 308-4 thereof and the sensor 324 is mounted to the protrusion 322 so as to engage or be in physical contact with the forehead of the user when the sensing device 300 is worn by the user. While in the illustrated embodiment, the protrusion 322 is located substantially at the middle of the length of the elongated body 302, it should be understood that the protrusion 322 with the sensor 324 mounted thereto can be located at any adequate position along the length of the inner face 308-4. Similarly, the elongated body 302 may be provided with more than one protrusion 322 each provided with a respective sensor 324.
[00122] The protrusion 322 is shaped and sized so that the sensor 324 mounted thereto engages or is in physical contact with the forehead of the user when the sensing device 300 is worn by the user. The protrusion 322 may ensure a better connection between the sensor 324 and the forehead of the user.
[00123] In some embodiments, the protrusion 322 includes a recess configured for receiving therein the sensor 324. In other embodiments, the protrusion 322 comprises a fastening mechanism to which the sensor 324 may be fastened. The manner in which the protrusion 322 facilitates holding of the sensors should not limit the scope of present disclosure.
[00124] In some embodiments, the protrusion 322 is integral with the elongated body 302. In other embodiments, the protrusion 322 may be attached or mounted to the elongated body 302 using any adequate securing method.
[00125] In some embodiments, the protrusion 322 is provided with a curved or rounded shape.
[00126] Referring back to Figure 3, the wearable sensing device 300 includes a bridge body 310 mounted to the elongated body 302. The bridge body 310 is configured for engaging the nose of the user, i.e. engaging opposite sides of the nose of the user. In some embodiments including the illustrated embodiment, the bridge body 310 projects from the bottom face 308-2 of the elongated body 302. In some embodiments, the bridge body 310 may be mounted to the elongated body 302 using any suitable mechanism, for example integral molded connection, saddle bridge, screw connection, snap-fit connection, hinged connection, keyhole bridge, magnetic connection, adhesive bonding, or any other suitable technique.
[00127] The bridge body 310 comprises a first arm 312-1 and a second arm 312-2 each having an end mounted to the elongated body 302. The first arm 312-1 and the second arm 312-2 are each shaped and sized to abut a respective side of the nose of the user. Further, each one of the first arm 312-1 and the second arm 312-2 comprises an arm inner face 318 for engagement with its respective side of the nose of the user.
[00128] As described in greater detail below, the bridge body 310 is provided with at least one sensor 320 mounted on the arm inner face 318 of at least one of its arms 312-1 and 312-2. The sensor 320 is mounted on the arm inner face 318 so as to engage or be in physical contact with a side of the user’s nose when the sensing device 300 is worn by the user. In some embodiments, the sensor 320 is located on the arm inner face 318 so as to face the nasal bone when the sensing device 300 is worn by the user. In some embodiments, the arm
inner face 318 of each arm 312-1 and 312-2 is provided with a respective sensor 320 so that the bridge body 310 is provided with at least two sensors 320.
[00129] The wearable sensing device 300 further comprises a first temple arm 314 and a second temple arm 316 which each extends longitudinally along a respective axis. The first temple arm 314 projects from the first end 304 of the elongated body 302 and the second temple arm 316 projects from the second end 306 of the elongated body 302. The first temple arm 314 and the second temple arm 316 are each shaped and sized so as to engage the top of a respective ear of the user when the wearable sensing device 300 is worn by the user.
[00130] In some embodiments, the first temple arm 314 and the second temple arm 316 may include hinges or flexible materials to allow for adjustment and a customized fit with respect to the elongated body 302. Hinges connecting the first temple arm 314 and the second temple arm 316 to the elongated body 302 may enable the first temple arm 314 and the second temple arm 316 to fold inward for compact storage. In other words, the hinges connecting the first temple arm 314 and the second temple arm 316 to the elongated body 302 may enable the first temple arm 314 and the second temple arm 316 to be rotatable with respect to the elongated body. Hinges may include any suitable mechanism for example, barrel hinges, spring hinges for flexibility, or any other suitable mechanism.
[00131] In some embodiments, the first temple arm 314 and/or the second temple arm 316 is provided with a sensor 326, 328. The sensor 326, 328 is mounted on the temple arm 314, 316 so as to engage or be in physical contact with the skull of the user when the wearable sensing device 300 is worn by the user. For example, the sensor 326, 328 may be mounted on a bottom face or an inner face of the temple arm 314, 316 so that the sensor 326, 328 is located substantially on top of an ear of the user when the wearable sensing device 300 is worn by the user.
[00132] For example, in the illustrated embodiment, the first temple arm 314 is provided with a sensor 326 mounted to a bottom face thereof and the second temple arm 316 is provided with a sensor 328 mounted to a bottom face thereof. Each sensor 326, 328 is located at a position along the length of its respective arm 314, 316 that is chosen so that the sensor 326, 328 engages a respective section of the skull of the user that is located on top of a respective ear, when the wearable sensing device 300 is worn by the user.
[00133] In some embodiments, the wearable sensing device 300 may include at least one lens of any suitable type, which is not shown in order to more clearly illustrate other features of the wearable sensing device 300. It should be understood that the wearable sensing device 300 may comprise further components such as a battery for powering the sensors, a processor, communication means, etc.
[00134] In at least some embodiments, the sensor(s) 320 is(are) to be used as active sensor(s) while the sensor 324 is to be used as a reference and/or ground sensor. The sensor 320 is configured for measuring a biosignal which is to be used as an active signal, while the sensor 324 is configured for measuring another biosignal which is to be used as a reference and/or ground signal.
[00135] In some embodiments which comprise at least one sensor 326, 328 located on a temple arm, the sensor 326, 328 may also be used as a reference and/or ground sensor to generate a reference and/or ground signal.
[00136] In other embodiments which comprise at least one sensor 326, 328 located on a temple arm, the sensor 326, 328 may also be used as an active sensor to generate an active signal.
[00137] It is contemplated that other arrangements of sensors may be used. In this arrangement, the wearable sensing device 300 may collect signals indicative of a user’s facial movements using the sensors 320, 324, 326, and 328 without providing more obstruction to the user’s vision than a pair of glasses. The wearable sensing device 300 may provide the biosignals for processing. The processing may include storing the biosignals, displaying the biosignals, performing various biopotential measurements, and/or the like.
[00138] Referring to Figure 4, there is illustrated a first exemplary wearable sensing device 400 having a shape comparable to that of a pair of earpieces 401 each comprising an earbud 404 insertable into an ear canal of a user.
[00139] In some embodiments, the wearable sensing device 400 may include a plurality of sensors/electrodes, such as one or more active sensors 402 in an inner portion 404 of one or both earpieces 401 that would be positioned within the ear canal of the user, one or more reference sensors 406 on an outer portion 408 of one earpiece 401 that would be in contact
with the outer ear of the user, and one or more ground sensors 410 on an outer portion 412 of the other earpiece 401. It is contemplated that other arrangements of sensors may be used. In this arrangement, the wearable sensing device 400 may collect signals indicative of a user’s facial movements using the sensors 402, 406, 410, while being comfortably worn by the user.
[00140] Referring to Figure 5, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 420 is illustrated. The earbud 420 extends longitudinally between a top portion or end 422 and a bottom portion or end 424 and an outer wall 426 extends between the top and bottom portions 422 and 424. The cross-section of the wall 426 taken at any position along the longitudinal axis is provided with a substantially circular shape. The outer wall 426 faces and at least partially engages the ear canal of the user when the wearable sensing device 420 is worn by the user. A plurality of sensors/electrodes 428, 430, and 432 are mounted on the external face of the wall 426 so as to engage with the ear canal when the earbud 420 is worn by the user. In the illustrated embodiment, the sensors 428, 430, and 432 each have a linear shape, extend longitudinally on the wall 426 between the top and bottom ends 422 and 424 and have a respective radial position about the wall 426. The sensors 428, 430, and 432 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal. For example, the sensors 428, 430, and 432 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor. The wearable sensing device 420 may collect signals indicative of a user’s facial movements using the sensors 428, 430, 432, while being comfortably worn by the user for example.
[00141] In some embodiments, the sensors 428, 430, and 432 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal. For example, the sensors 428 may be active sensors and the sensors 430 may be ground sensors while the sensors 432 are reference sensors.
[00142] In other embodiments, the sensors 428, 430, and 432 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00143] Referring to Figure 6, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 440 is illustrated. The earbud 440 extends longitudinally between a top portion or end 442 and a bottom portion or end 444 and an outer wall 446 extends between the top and bottom portions 442 and 444. The cross-section of the wall 446 taken at any position along the longitudinal axis is provided with a substantially circular shape. The outer wall 446 faces and at least partially engages the ear canal of the user when the wearable sensing device 440 is worn by the user. A plurality of sensors/electrodes 448, 450, and 452 are mounted on the external face of the wall 446 so as to engage with the ear canal when the earbud 440 is worn by the user. In the illustrated embodiment, the sensors 448, 450, and 452 each have a linear shape, extend longitudinally on the wall 446 between the top and bottom ends 442 and 444 and have a respective radial position about the wall 446. The sensors 448, 450, and 452 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal. For example, the sensors 448, 450, and 452 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor. The wearable sensing device 440 may collect signals indicative of a user’s facial movements using the sensors 448, 450, and 452, while being comfortably worn by the user for example.
[00144] In comparison to the earpiece of Figure 5, the sensors 448, 450, and 452 each have a width that is larger than the width of the sensors 428, 430, and 432.
[00145] In some embodiments, the sensors 448, 450, and 452 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal. For example, the sensors 448 may be active sensors and the sensors 450 may be ground sensors while the sensors 452 are reference sensors.
[00146] In other embodiments, the sensors 448, 450, and 452 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00147] Referring to Figure 7, there is illustrated a further exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 460 is illustrated. The earbud 460 extends longitudinally between a top portion or end 462 and a
bottom portion or end 464 and an outer wall 466 extends between the top and bottom portions 462 and 464. The cross-section of the wall 466 taken at any position along the longitudinal axis is provided with a substantially circular shape. The outer wall 466 faces and at least partially engages the ear canal of the user when the wearable sensing device 460 is worn by the user. A plurality of sensors/electrodes 468, 470, and 472 are mounted on the external face of the wall 466 so as to engage with the ear canal when the earbud 460 is worn by the user. In the illustrated embodiment, the sensors 468, 470, and 472 each have a dot shape, and are located at a respective radial position about the earbud 460 and at a respective longitudinal position between the top and bottom ends 462 and 464. The sensors 468, 470, and 472 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal. For example, the sensors 468, 470, and 472 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor. The wearable sensing device 460 may collect signals indicative of a user’s facial movements using the sensors 468, 470, and 472, while being comfortably worn by the user for example.
[00148] In some embodiments, the sensors 468, 470, and 472 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal. For example, the sensors 468 may be active sensors and the sensors 470 may be ground sensors while the sensors 472 are reference sensors.
[00149] In other embodiments, the sensors 468, 470, and 472 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00150] Referring to Figure 8, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 480 is illustrated. The earbud 480 extends longitudinally between a top portion or end 482 and a bottom portion or end 484 and an outer wall 486 extends between the top and bottom portions 482 and 484. The cross-section of the wall 486 taken at any position along the longitudinal axis is provided with a substantially circular shape. The outer wall 486 faces and at least partially engages the ear canal of the user when the wearable sensing device 480 is worn by the user. A plurality of sensors/electrodes 488, 490, and 492 are mounted on the external face of the wall 486 so as to engage with the ear canal when the earbud 480 is worn
by the user. In the illustrated embodiment, the sensors 488, 490, and 492 each have a circular shape, extend along a circumference of the earbud 480 at a respective longitudinal position between the top and bottom ends 482 and 484. The sensors 488, 490, and 492 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal. For example, the sensors 488, 490, and 492 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG) sensor. The wearable sensing device 480 may collect signals indicative of a user’s facial movements using the sensors 488, 490, and 492, while being comfortably worn by the user for example.
[00151] In some embodiments, the sensors 488, 490, and 492 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal. For example, the sensors 488 may be active sensors and the sensors 490 may be ground sensors while the sensors 492 are reference sensors.
[00152] In other embodiments, the sensors 488, 490, and 492 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00153] Referring to Figure 9, there is illustrated another exemplary wearable sensing device having a shape comparable to that of a pair of earpieces of which only the earbud 493 is illustrated. The earbud 493 extends longitudinally between a top portion or end 494 and a bottom portion or end 495 and an outer wall 496 extends between the top and bottom portions 494 and 495. The cross-section of the wall 496 taken at any position along the longitudinal axis is provided with a substantially circular shape. The outer wall 496 faces and at least partially engages the ear canal of the user when the wearable sensing device 493 is worn by the user. A plurality of sensors/electrodes 497, 498, and 499 are mounted on the external face of the wall 496 so as to engage with the ear canal when the earbud 493 is worn by the user. In the illustrated embodiment, the sensors 497, 498, and 499 each have a circular shape, extend along a circumference of the earbud 493 at a respective longitudinal position between the top and bottom ends 494 and 495. The sensors 497, 498, and 499 are each configured for measuring a biosignal such as an electrochemical or a biopotential signal. For example, the sensors 497, 498, and 499 may each comprise an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and/or an electromyography (EMG)
sensor. The wearable sensing device 493 may collect signals indicative of a user’s facial movements using the sensors 497, 498, and 499, while being comfortably worn by the user for example.
[00154] In comparison to the earpiece of Figure 8, the sensors 497, 498, and 499 each have a width that is larger than the width of the sensors 488, 490, and 492.
[00155] In some embodiments, the sensors 497, 498, and 499 each have a predefined function, i.e., their measured signal is to be used as an active signal or a ground and/or reference signal. For example, the sensors 497 may be active sensors and the sensors 498 may be ground sensors while the sensors 499 are reference sensors.
[00156] In other embodiments, the sensors 497, 498, and 499 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00157] Referring to Figures 10A and 10B, there is illustrated an exemplary wearable sensing device 500 having a shape comparable to that of a pair of headphones that can be worn over the ears. The wearable sensing device 500 includes earcups 504 and 508 for engaging ears of a user. The earcups 504 and 508 include inner edges 501. The inner edges 501 face the ears of the user when the wearable sensing device 500 is worn by the user. It is to be noted that in some embodiments, the wearable sensing device 500 may include a single earcup instead of two.
[00158] The wearable sensing device 500 includes a plurality of sensors/electrodes, such as one or more active sensors 502 on the inner edges 501 of the earcups 504, 508, and one or more reference and/or ground sensors 506 on one or both of the earcups 504 and 508. When the wearable sensing device 500 is worn by the user, the one or more reference and/or ground sensors 506 are located on a mastoid bone of the user and in front of the ear, aligned with the eyes of the user. Such an arrangement provides direct physical contact of the ground and/or reference sensors 506 with the skin of the user. Direct physical contact of the ground and/or reference sensors with the skin of a user has several benefits, for example an improved signal-to-noise ratio (SNR). Disposing the one or more reference and/or ground sensors 506 on a mastoid bone of the user and in front of the ear, aligned with the eyes of the user, thus provides these benefits to the wearable sensing device 500.
[00159] The plurality of sensors 502 and 506 are configured to measure active biosignals, ground biosignals and reference biosignals. The wearable sensing device 500 may collect signals indicative of a user’s facial movements using the sensors 502 and 506, while being comfortably worn by the user.
[00160] Referring to Figure 10A, in some embodiments, the wearable sensing device 500 may include a support body 510 (for example, a headband) coupled to the two earcups 504 and 508 for holding the two earcups 504 and 508 over the ears of the user when the wearable sensing device 500 is worn by the user. The support body 510 engages a head portion of the user when the wearable sensing device 500 is worn by the user. The support body 510 may comprise an inner surface 512. To further improve the quality of various biopotential measurements, the wearable sensing device 500 may include one or more sensors 514 on the inner surface 512 for engaging a skull of the user when the wearable sensing device 500 is worn by the user. In some embodiments, at least some of the sensors 514 may include ground sensors and/or reference sensors to measure additional ground biosignals and/or reference biosignals. In some embodiments, at least some of the sensors 514 may include active sensors to measure additional active biosignals.
[00161] In other embodiments, the sensors 502, 506 and 514 do not have a predefined function and their function is dynamically assigned using the method described above for example.
[00162] Referring to Figures 11A-11C, there is illustrated an exemplary wearable sensing device 600 having a shape comparable to that of a virtual/augmented reality gear. The wearable sensing device 600 comprises a head mounted display 602 and support bodies 608 and 612.
[00163] The head mounted display 602 engages with a portion of the face of the user. The head mounted display 602 comprises an inner edge 604 facing the portion of the face of the user when the wearable sensing device 600 is worn by the user.
[00164] The wearable sensing device 600 includes a plurality of sensors/electrodes 606 on the inner edge 604 of the head mounted display 602. The plurality of sensors 606 may include active sensors and one or more reference sensors and ground sensors. When the wearable sensing device 600 is worn by the user, the plurality of sensors 606 are located on the portion of the face of the user. Such an arrangement provides direct physical contact of the plurality of sensors 606 with the skin of the user. Direct physical contact of the ground and/or reference sensors with the skin of the user has several benefits, for example, an improved SNR. Disposing the plurality of sensors 606 on the portion of the face of the user thus provides these benefits to the wearable sensing device 600.
[00165] The plurality of sensors 606 are configured to measure active biosignals, ground biosignals and reference biosignals. The wearable sensing device 600 may collect signals indicative of a user’s facial movements using the sensors 606, while being comfortably worn by the user.
[00166] Referring to Figures 11B and 11C, in some embodiments, the wearable sensing device 600 may include support bodies 608 and 612 (for example, headbands) coupled to the head mounted display 602 for engaging a head portion of the subject and for holding head mounted display 602 over a portion of the face of the user when the wearable sensing device 600 is worn by the user.
[00167] The support bodies 608 and 612 may comprise inner surfaces 610 and 614. To further improve the quality of various biopotential measurements, the wearable sensing device 600 may include one or more sensors 616 on the inner surface 610 and one or more sensors 618 on the inner surface 614 for engaging a skull of the user when the wearable sensing device 600 is worn by the user. In some embodiments, at least some of the sensors 616 and 618 may include ground sensors and/or reference sensors to measure additional ground biosignals and/or reference biosignals. In some embodiments, at least some of the sensors 616 and 618 may include active sensors to measure additional active biosignals.
[00168] In the embodiments of Figures 3-9, 10A-10B, and 11A-11C, the sensors may be of any suitable type. In addition, it should be appreciated that the sensors may be able to detect signals even if they are at some distance from the source of the signal, movement, or
muscle group being detected. For example, signals relating to the movement of facial muscles may be detected by sensors located near the ear of the wearer.
[00169] In the embodiments of Figures 3-9, 10A-10B, and 11A-11C, the wearable sensing devices 300, 400, 500, and 600 may have integrated housings to house electronic elements. The housing within the wearable sensing devices 300, 400, 500, and 600 may be configured to accommodate components such as a processor, memory, or other electronic components. Additionally, the housing may include openings or ports for connectivity or other functional requirements. In some embodiments, the housing may further include batteries, communication modules, or any other electronic devices associated with the functionality of the wearable sensing devices 300, 400, 500, and 600.
[00170] Figure 12 illustrates one embodiment of a method 700 for operating sensors to obtain biosignals of a subject. The method 700 allows for determining, amongst a plurality of sensors configured to measure biosignals, which sensor(s) can be used as a reference and/or ground sensor and therefore which other sensor(s) can be used as an active sensor.
[00171] At step 702, impedance values of a plurality of sensors each located at a respective position on a subject are received. In some embodiments, the impedance values are received from a plurality of sensors configured to measure biosignals such as a plurality of biopotential sensors. In other embodiments, the impedance values are received from impedance sensors each located adjacent to a respective sensor configured to measure a biosignal. In this case, it will be understood that each impedance sensor and its respective biosignal sensor are considered as being located at the same location on the subject. In some embodiments, the plurality of sensors may be included in the wearable sensing devices, such as 300, 400, 500, and 600. The impedance values are received when the wearable sensing device, such as 300, 400, 500, or 600, is worn by a user.
[00172] In some embodiments, one or more current generators may be controlled for generating alternating currents at various positions on the body of the subject where the plurality of sensors are located. The one or more current generators may be included in the wearable sensing devices, such as 300, 400, 500, and 600. The generated alternating current may have a known value. The known alternating current may be applied to the plurality of sensors via an electrode-skin interface. The application of the known alternating current produces voltages across the plurality of sensors.
[00173] In some embodiments, one or more voltage sensors may be controlled for measuring the voltage across each one of the plurality of sensors. The impedance value across a given sensor is determined based on the voltage across that sensor and the known alternating current. The impedance across the given sensor may be computed as:
Z = (Measured Voltage * sqrt(2)) / (Known Current)
[00174] At step 704, the lowest impedance value amongst the impedance values received at step 702 is determined. A given one of the plurality of sensors associated with the lowest impedance value is identified based on the position at which the lowest impedance has been measured, i.e., the identified sensor is the sensor located at the position at which the lowest impedance has been measured. It is to be noted that a lower impedance value represents a better direct physical contact of the given sensor with the skin of the user as compared to the other sensors. Therefore, identifying the sensor associated with the lowest impedance value assists in improving the quality of the active biosignals, at least in terms of SNR.
[00175] In some embodiments, the two lowest impedance values amongst the impedance values received at step 702 are determined. Based on the two lowest impedance values, two given sensors of the plurality of sensors are then identified based on the locations at which the two lowest impedance values have been measured.
[00176] At step 706, the identified given sensor is operated as a reference sensor and/or a ground sensor to obtain at least one of a ground and/or reference biosignal. In addition, at least another sensor other than the given sensor is operated as an active sensor to obtain an active biosignal.
[00177] In some embodiments, where two given sensors are identified, one of the two given sensors is operated as a ground sensor to obtain the ground biosignal and the other one of the two given sensors is operated as a reference sensor to obtain the reference biosignal. In addition, another sensor other than the two given sensors is operated as an active sensor to obtain the active biosignal.
[00178] Once the sensors are operated to obtain the active biosignal and the reference biosignal and/or the ground biosignal, collectively referred to as an input signal, the input signal is provided for further processing.
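By way of illustration only, the following non-limiting sketch in Python shows how method 700 might be implemented in software, combining the impedance computation of paragraph [00173] with the role assignment of steps 704 and 706. The sensor positions, current value, and function names are hypothetical placeholders, and the assumption that the two lowest-impedance contacts become the ground and reference sensors follows paragraph [00177].

import math

def impedance(v_measured: float, i_known: float) -> float:
    # Z = (measured voltage * sqrt(2)) / known current, per paragraph [00173]
    return v_measured * math.sqrt(2) / i_known

def assign_roles(voltages: dict, i_known: float) -> dict:
    # voltages maps a sensor position to the voltage measured across it (step 702)
    z = {position: impedance(v, i_known) for position, v in voltages.items()}
    ranked = sorted(z, key=z.get)  # lowest impedance first (step 704)
    # The two best skin contacts are operated as ground and reference sensors,
    # and the remaining sensors are operated as active sensors (step 706).
    return {"ground": ranked[0], "reference": ranked[1], "active": ranked[2:]}

# Hypothetical usage with three in-ear sensor positions and a 10 uA test current:
roles = assign_roles({"concha": 0.9, "tragus": 1.4, "canal": 2.7}, i_known=1e-5)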
[00179] Referring to Figure 13, a method 800 will be described. The method 800 may perform one or more functions based on the input signal, such as determining facial movements, determining an emotional or mental state of the wearer, or any other suitable determination.
[00180] At 802, an input signal is received from one or more sensors. At least some of the one or more sensors may be disposed on a device such as the devices 300, 400, 500 described above. The input signal from the sensors may include one or more brain signals. The input signal from the sensors may include one or more biosignals that may be indicative of the movements of particular muscles or muscle groups of the wearer. In some embodiments, the input signal received does not include or require any intentional input from the wearer, and only requires detection of one or more movements or physical states of the wearer.
[00181] At 804, the received input signal is processed to identify one or more gestures or movements of the user. This processing will be discussed below in further detail.
[00182] At 806, the determined gestures or movements of the user are mapped to one or more actions to be taken. The action to be taken may depend on an electronic device currently being used by the user. For example, when the user is operating a computer mouse, the determined gestures or movements could be used to direct the movement of the cursor or operate the mouse buttons. In another example, if a user is in an electric wheelchair, the determined gestures or movements could be used to direct the movement of the wheelchair. In other situations, the determined gestures or movements could be used to operate one or more devices or appliances, such as lights, televisions, or speakers, or other internet of things (IoT) devices that are connectable to a communications network. The mapping between gestures and actions may be preprogrammed, or may be configurable by the user.
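As a non-limiting illustration, the mapping of step 806 may be represented in software as a lookup table from identified gestures to device commands, as in the following Python sketch; the gesture names, devices, and commands are hypothetical placeholders that could be preprogrammed or configured by the user as described above.

# Hypothetical gesture-to-action mapping (step 806).
GESTURE_TO_ACTION = {
    "dual_blink": ("wheelchair", "toggle_motion"),
    "head_left": ("wheelchair", "steer_left"),
    "head_right": ("wheelchair", "steer_right"),
    "jaw_clench": ("cursor", "click"),
}

def map_and_dispatch(gesture: str, transmit) -> None:
    # Look up the action for the identified gesture and transmit the
    # corresponding command to the appropriate device (step 808).
    action = GESTURE_TO_ACTION.get(gesture)
    if action is not None:
        device, command = action
        transmit(device, command)

# Hypothetical usage with a stand-in transmitter:
map_and_dispatch("dual_blink", lambda device, command: print(device, command))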
[00183] At 808, a command is transmitted to the appropriate device to perform the one or more actions determined at 806.
[00184] Referring to Figure 14, a method 900 of processing a received signal to identify one or more gestures or movements of the user will be described. The method 900 may correspond to step 804 of Figure 13.
[00185] Training data from human participants is collected at 902. This step may optionally be omitted, for example if training data has previously been collected or is otherwise provided, or if the AI has already been trained, in which case the method 900 begins at step 908. The training data is collected using a standardized protocol. Each participant wears a training device with a specified number and configuration of sensors, which may correspond to the arrangement of sensors on the user device. The participant is asked to execute specific gestures, for example by displaying instructions on a screen facing the participant. The biopotential signals and/or sensor data are recorded by the device, and associated with the known gestures that were performed. This process may be repeated with multiple participants, for example to ensure a large amount of data and account for interpersonal variability in the observed signals. The intended user of a particular device may optionally be used as the training participant to obtain training data for a particular device, so that the device is adapted to the signals generated by the user while performing the target gestures.
[00186] At 904, the training data is preprocessed using standard techniques known to persons skilled in the art. The techniques applied may include one or more of: applying a low pass filter with a cutoff frequency; applying a high pass filter (such as with an equation of the form y[n] = x[n] - y[n-1]); or resampling. The generated dataset is then divided into time windows for each time segment, at predetermined intervals.
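As a non-limiting illustration, the preprocessing of step 904 might resemble the following Python sketch. The cutoff frequencies, sampling rates, and window length are illustrative assumptions rather than values specified herein, and standard Butterworth filters stand in for the generic low pass and high pass filtering described above.

import numpy as np
from scipy.signal import butter, sosfiltfilt, resample

def preprocess(raw: np.ndarray, fs_in: int = 500, fs_out: int = 250,
               window_s: float = 1.0) -> np.ndarray:
    # raw has shape (channels, samples); returns (windows, channels, samples).
    sos_lp = butter(4, 40.0, btype="low", fs=fs_in, output="sos")   # low pass filter with a cutoff frequency
    sos_hp = butter(2, 0.5, btype="high", fs=fs_in, output="sos")   # high pass filter
    x = sosfiltfilt(sos_lp, raw, axis=-1)
    x = sosfiltfilt(sos_hp, x, axis=-1)
    x = resample(x, int(x.shape[-1] * fs_out / fs_in), axis=-1)     # resampling
    win = int(window_s * fs_out)
    n_win = x.shape[-1] // win
    # divide the dataset into time windows at predetermined intervals
    return x[:, :n_win * win].reshape(x.shape[0], n_win, win).swapaxes(0, 1)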
[00187] At 906, the AI model is trained, using the preprocessed data, to identify the gestures. The AI model may include a transformer. In preparation for the transformer, each time window of data may be divided into a predetermined number of tokens corresponding to the number of time points to be encoded by the transformer block. The windows are transposed and passed to another transformer block, where the tokens are mapped to a higher dimension and positional encoding can be added to each input. Hyperparameters, such as the number of layers of the transformer, the dimension of the tokens, and others, may be tuned by applying grid search or Bayesian optimization. Each transformer block is assigned a randomly initialized token, and its parameters are updated using gradient descent. A loss function is applied, such as cross entropy loss for classification, and mean squared error or mean absolute error for regression. A cross-user train-test split can be used to train and test the AI model.
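As a non-limiting illustration, a windowed-token transformer classifier of the kind described above might be sketched in Python (assuming the PyTorch library) as follows. The channel count, token count, model dimension, layer count, and class count are illustrative hyperparameters of the sort that could be tuned by grid search or Bayesian optimization; they are not values specified herein.

import torch
import torch.nn as nn

class GestureTransformer(nn.Module):
    # Each time window is split into tokens, projected to a higher dimension,
    # given positional encodings, and classified via a learned class token.
    def __init__(self, n_channels=8, window_len=256, n_tokens=16,
                 d_model=64, n_layers=2, n_classes=5):
        super().__init__()
        assert window_len % n_tokens == 0
        self.token_len = window_len // n_tokens
        self.proj = nn.Linear(self.token_len * n_channels, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens + 1, d_model))  # positional encoding
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # randomly initialized token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (batch, channels, window_len)
        b, c, _ = x.shape
        x = x.view(b, c, -1, self.token_len).permute(0, 2, 1, 3)  # split window into tokens
        x = self.proj(x.reshape(b, -1, c * self.token_len))       # map tokens to a higher dimension
        x = torch.cat([self.cls.expand(b, -1, -1), x], dim=1) + self.pos
        return self.head(self.encoder(x)[:, 0])  # logits for a cross entropy loss

Training such a model would then apply a cross entropy loss and gradient descent over a cross-user train-test split, as described above.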
[00188] Alternatively, an AI model such as a deep neural network (NN), a convolutional NN (CNN), a long short-term memory (LSTM) NN, or other temporal-based architecture, or a combination of these models, may be used. In this approach, hyperparameters such as depth, kernel size, dilation rate, filters, and others may be tuned to optimize the system for speed and accuracy. The time windows are used as the inputs. A temporal convolution is applied to each channel of inputs, followed by a residual connection after each layer.
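As a non-limiting illustration, one temporal convolution layer with a residual connection, of the kind this alternative describes, might be sketched as follows; the kernel size, dilation rate, and channel count are illustrative hyperparameters.

import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    # One dilated causal convolution over the time axis, followed by a
    # residual connection, as described above.
    def __init__(self, channels=8, kernel=3, dilation=2):
        super().__init__()
        pad = (kernel - 1) * dilation
        self.pad = nn.ConstantPad1d((pad, 0), 0.0)  # left-pad to preserve causality and window length
        self.conv = nn.Conv1d(channels, channels, kernel, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return torch.relu(self.conv(self.pad(x))) + x  # residual connection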
[00189] For example, using an LSTM NN, a sliding window can be applied to the data, with each window frame containing either a target signal or an absence of a target signal. Models for target signals may be obtained by using training data based on test subjects who are asked to perform different movements or gestures while wearing devices equipped with similar sensors. Thus, a degree of variation in the target signal can also be determined based on the training data and used to identify target signals that are not identical to the training data, for example due to interpersonal variations or noise. The AI may use machine learning based on the detected signals to continue to adapt to a particular user over time, which may provide increasingly accurate determinations.
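As a non-limiting illustration, the sliding-window LSTM approach might be sketched as follows (again assuming the PyTorch library), with each window classified as containing a target signal or no target signal; the layer sizes are illustrative assumptions.

import torch.nn as nn

class SlidingWindowLSTM(nn.Module):
    # Classify each window frame as target signal vs. absence of target signal.
    def __init__(self, n_channels=8, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, window):  # window: (batch, time, channels)
        _, (h, _) = self.lstm(window)
        return self.head(h[-1])  # logits from the final hidden state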
[00190] At 908, the trained AI may be used to identify gestures made by a wearer of the device, based on signals received from the sensors. Depending on the extent of the training, the AI may be able to identify different types of gestures or movements, such as one or more of: eye movements, such as saccades, single winks, or intentional and unintentional dual blinks; and facial movements, such as teeth clenching, tongue movements, frowning, smiling, nose twitching, or other motions and expressions. Furthermore, multiple signals can be detected in parallel, which may permit an understanding of the intent and state of the user.
[00191] If a user is mobility impaired, for example if the user does not have control of all of the muscles that would be detected by the sensors, the muscles and muscle groups that they have control over can be identified, and then solutions can be adapted specifically to their needs. This may include modifying sensor placement, sensor position, sensor material, and sensor geometry. Additionally or alternatively, the sensor inputs may be used to detect other states, for example emotional states, such as anxiety, stress, relaxation, and others;
mental states, such as distraction, focus, or seizure states; or body states, such as sitting, walking, or running. The device may be configured to map the actions to be performed at 808 to movements or states that the user is capable of generating and the device is capable of detecting and identifying. It is contemplated that additional training of the AI model may be required or desired to ensure that the device can accurately identify the movements and gestures of the user. Additional body-mounted or body-pointed sensors may also be provided, to produce additional signal inputs that may assist in making these or additional determinations when combined with the sensors described above. The additional sensors may provide additional information about the activity, stance, posture, or other movement of the user to assist the AI model in making determinations.
[00192] In some embodiments, for people with movement disabilities, such as paraplegia, quadriplegia, paralysis, or other movement disorders, movements of specific muscle groups within the user’s control can be mapped to different commands in order to control devices. In one embodiment, if a paralyzed user is only able to rotate their head and blink, they could steer a wheelchair by indicating a direction using their head, and use blinks to start or stop. Other mappings of gestures to actions may alternatively be used. In one embodiment, a user can control a cursor and operate a virtual mouse in order to make selections on a screen, enabling the user to communicate or type without the need for assistance or other specialized devices. This platform can be connected through Bluetooth™ or Wi-Fi™ or other connection protocols to assistive devices, such as control switches, feeding robots, electric wheelchairs, and others, as well as to conventional electronic devices, including computers, phones, television sets, radio or speaker sets, and others. The determination of emotion and physical states can also be used to create a virtual avatar of the user, mapping their real emotions, expressions, and actions to a digital representation.
[00193] Direct physical contact of the ground and/or reference sensors with the skin (for example, the forehead) presents various benefits. Direct contact with the skin can be more effective in capturing accurate and reliable biosignals compared to non-contact or through-clothing sensors. Direct skin contact allows for a more efficient and stable interface between the ground and/or reference sensors and the body of the user, resulting in improved signal quality for various biopotential measurements. Advantages of direct skin contact include reduced signal interference, enhanced signal amplitude, improved signal stability, better electrode-skin coupling, minimized environmental interference, and an improved signal-to-noise ratio (SNR).
[00194] Referring to Figure 15, there is shown a computing device 100 suitable to implement various embodiments of the present disclosure. The computing device 100 comprises various hardware components including one or more single or multi-core processors collectively represented by processor 110, a graphics processing unit (GPU) 111, a solid-state drive 120, a random-access memory 130, a display interface 140, and an input/output interface 150.
[00195] Communication between the various components of the computing device 100 may be enabled by one or more internal and/or external buses 160 (e.g., a PCI bus, universal serial bus, IEEE 1394 “Firewire” bus, SCSI bus, Serial-ATA bus, etc.), to which the various hardware components are electronically coupled.
[00196] The input/output interface 150 may be coupled to a touchscreen 190 and/or to the one or more internal and/or external buses 160. The touchscreen 190 may be part of the display. In one or more implementations, the touchscreen 190 is the display. The touchscreen 190 may equally be referred to as a screen 190. In the implementations illustrated in Figure 15, the touchscreen 190 comprises touch hardware 194 (e.g., pressure-sensitive cells embedded in a layer of a display allowing detection of a physical interaction between a user and the display) and a touch input/output controller 192 allowing communication with the display interface 140 and/or the one or more internal and/or external buses 160. In one or more implementations, the input/output interface 150 may be connected to a keyboard (not shown), a mouse (not shown) or a trackpad (not shown) allowing the user to interact with the computing device 100 in addition to or in replacement of the touchscreen 190.
[00197] According to embodiments of the present disclosure, the solid-state drive 120 stores program instructions suitable for being loaded into the random-access memory 130 and executed by the processor 110 and/or the GPU 111. For example, the program instructions may be part of a library or an application.
[00198] The computing device 100 may be implemented as a server, a desktop computer, a laptop computer, a tablet, a smartphone, a personal digital assistant or any
device that may be configured to implement the present technology, as it may be understood by a person skilled in the art.
[00199] The embodiments described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the appended claims.
Claims
1. A method of identifying a facial gesture performed by a user, the method comprising:
receiving at least one signal input from at least one sensor in contact with a head of the user;
identifying at least one gesture of the user corresponding with the at least one received signal;
mapping the at least one identified gesture to at least one action to be taken; and
transmitting a command to an electronic device to perform the action.
2. The method of claim 1, wherein identifying the at least one gesture is performed by a neural network (NN).
3. The method of claim 2, wherein the NN comprises a transformer.
4. The method of any of claims 2 to 3, wherein the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
5. The method of any of claims 1 to 4, wherein the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
6. The method of any of claims 1 to 5, wherein the at least one sensor is included in a wearable sensing device.
7. The method of claim 6, wherein the at least one sensor is disposed near or in an ear of the user.
8. The method of any of claims 1 to 7, wherein at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
9. The method of any of claims 1 to 8, wherein the sensor is capable of detecting an electrochemical or biopotential signal.
10. The method of any of claims 1 to 9, wherein the sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
11. The method of any of claims 1 to 10, wherein the at least one sensor is configured to detect a specific facial gesture of the user.
12. A system for identifying a facial gesture performed by a user, the system comprising:
a processor; and
a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions;
the processor, upon executing the instructions, being configured for:
receiving at least one signal input from at least one sensor in contact with a head of the user;
identifying at least one gesture of the user corresponding with the at least one received signal;
mapping the at least one identified gesture to at least one action to be taken; and
transmitting a command to an electronic device to perform the action.
13. The system of claim 12, wherein identifying the at least one gesture is performed by a neural network (NN).
14. The system of claim 13, wherein the NN comprises a transformer.
15. The system of any of claims 13 to 14, wherein the neural network comprises at least one of: a deep NN, a convolutional NN (CNN), and a long short-term memory (LSTM) NN.
16. The system of any of claims 12 to 15, wherein the at least one gesture comprises at least one of: a blink; a wink; a jaw movement; and a head movement.
17. The system of any of claims 12 to 16, wherein the at least one sensor is included in a wearable sensing device.
18. The system of claim 17, wherein the at least one sensor is disposed near or in an ear of the user.
19. The system of any of claims 12 to 18, wherein the at least one sensor comprises at least one of: an electroencephalography (EEG) sensor; an electrooculography (EOG) sensor; and an electromyography (EMG) sensor.
20. The system of any of claims 12 to 19, wherein the at least one sensor is capable of detecting an electrochemical or biopotential signal.
21. The system of any of claims 12 to 20, wherein the at least one sensor comprises one or more of: a conductive polymer; a conductive filler material; carbon nanotubes; and silver nanoparticles.
22. The system of any of claims 12 to 21, wherein the at least one sensor is configured to detect a specific facial gesture of the user.
23. A method of training a neural network (NN) for identifying a facial gesture, the method comprising: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
24. The method of claim 23, wherein the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by at least one user.
25. The method of claim 24, wherein the at least one user is an intended user of the wearable sensing device.
26. The method of any of claims 23 to 25, wherein preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
27. The method of any of claims 23 to 26, wherein preprocessing the training data comprises dividing the training data into a plurality of time windows.
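The preprocessing step of claim 27 is a standard windowing operation. The sketch below is a minimal NumPy rendering under assumed window and hop lengths (the names window_signal, win_s and hop_s are placeholders); the claim only requires dividing the training data into a plurality of time windows:

```python
import numpy as np

def window_signal(recording: np.ndarray, fs: float,
                  win_s: float = 0.5, hop_s: float = 0.25) -> np.ndarray:
    """Divide a (channels, samples) recording into overlapping time windows.

    Window and hop lengths are illustrative assumptions. Assumes the
    recording is at least one window long; returns an array of shape
    (n_windows, channels, window_samples).
    """
    win, hop = int(win_s * fs), int(hop_s * fs)
    n = 1 + (recording.shape[-1] - win) // hop
    return np.stack([recording[..., i * hop:i * hop + win] for i in range(n)])
```

At a 250 Hz sampling rate, the defaults above would yield 125-sample windows with 50% overlap, each of which can be labelled with the gesture being performed and fed to the NN.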
28. A system for training a neural network (NN) for identifying a facial gesture, the system comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: collecting training data from at least one user performing one or more predetermined facial gestures; preprocessing the training data; and providing the preprocessed training data as an input to the NN.
29. The system of claim 28, wherein the training data comprises sensor data from one or more sensors of at least one wearable sensing device worn by at least one user.
30. The system of claim 29, wherein the at least one user is an intended user of the wearable sensing device.
31. The system of any of claims 28 to 30, wherein preprocessing the training data comprises using at least one transformer to determine at least one temporal parameter of the training data.
32. The system of any of claims 28 to 31, wherein preprocessing the training data comprises dividing the training data into a plurality of time windows.
33. A wearable sensing device comprising: an elongated body extending between a first end and a second end and comprising a body inner face, the body inner face facing a subject when the wearable sensing device is worn by the subject; a first temple arm extending from the first end of the elongated body and a second temple arm extending from the second end of the elongated body, the first temple arm and the second temple arm each for engaging a respective ear of the subject; a bridge body mounted to the elongated body, the bridge body being configured for engaging a nose of the subject and comprising a first arm and a second arm each for abutting a respective side of the nose, each one of the first arm and the second arm comprising a respective arm inner face for engagement with the nose of the subject; at least one first sensor mounted to the body inner face of the elongated body so as to engage a forehead of the subject when the wearable sensing device is worn by the subject, the at least one first sensor for measuring at least one of a ground biosignal and a reference biosignal; and at least one second sensor positioned on the respective arm inner face of at least one of the first arm and the second arm so as to engage a nasal bone of the nose when the wearable sensing device is worn by the subject, the at least one second sensor for measuring at least one active biosignal.
34. The wearable sensing device of claim 33, wherein the at least one first sensor comprises a reference sensor and a ground sensor each mounted to the body inner face of the elongated body so as to engage a forehead of the subject when the wearable sensing device is worn by the subject.
35. The wearable sensing device of claim 33 or 34, wherein the elongated body comprises a protrusion projecting from the inner face thereof, the at least one first sensor being mounted to the protrusion.
36. The wearable sensing device of claim 35, wherein the protrusion is provided with a curved shape.
37. The wearable sensing device of claim 35 or 36, wherein the protrusion is located substantially in the middle of the elongated body.
38. The wearable sensing device of any one of claims 33 to 37, further comprising at least one third sensor mounted to at least one of the first temple arm and the second temple arm for engaging a skull of the subject when the wearable sensing device is worn by the subject, the at least one third sensor for measuring at least one of an additional ground biosignal and an additional reference biosignal.
39. The wearable sensing device of claim 38, wherein the at least one third sensor comprises two biosignal sensors each mounted to a respective one of the first temple arm and the second temple arm.
40. The wearable sensing device of claim 38 or 39, wherein the at least one first sensor, the at least one second sensor and the at least one third sensor each comprise at least one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor and an electromyography (EMG) sensor.
41. The wearable sensing device of any one of claims 33 to 40, wherein the first temple arm and the second temple arm are each rotatably mounted to the elongated body.
42. A method for operating sensors to obtain biosignals, the method comprising: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
43. The method of claim 42, wherein said determining a lowest impedance value comprises determining two lowest impedance values and said identifying a given one of the plurality of sensors comprises identifying two given ones of the plurality of sensors.
44. The method of claim 43, wherein said operating comprises operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and a second one of the two given ones as a reference sensor to obtain the reference biosignal.
45. The method of any one of claims 42 to 44, further comprising: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
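Claims 42 to 45 amount to estimating a contact impedance per electrode site from the injected alternating current and the measured voltage (Z = V / I), then promoting the lowest-impedance sites, i.e. those with the best skin contact, to ground and reference. A minimal sketch, assuming per-sensor current and voltage arrays (all names are hypothetical placeholders):

```python
import numpy as np

def assign_sensor_roles(currents_a: np.ndarray, voltages_v: np.ndarray):
    """Pick ground/reference sensors from estimated contact impedances.

    Per-site impedance is estimated as Z = V / I from the injected
    alternating current and the measured voltage; the two lowest-impedance
    sites become ground and reference, and the remaining sensors are
    operated as active electrodes. Assumes at least three sensors.
    """
    impedances = voltages_v / currents_a     # Z = V / I, one value per sensor
    order = np.argsort(impedances)           # sensor indices, ascending Z
    ground, reference = int(order[0]), int(order[1])
    active = order[2:].tolist()              # remaining sensors stay active
    return ground, reference, active
```

For example, with an injected current of 10 µA everywhere and measured voltages of [0.12, 0.05, 0.08, 0.20] V, the impedances are [12, 5, 8, 20] kΩ, so sensor 1 becomes ground, sensor 2 becomes reference, and sensors 0 and 3 remain active.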
46. A system for operating sensors to obtain biosignals, the system comprising: a processor; a non-transitory storage medium operatively connected to the processor, the non-transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of a plurality of sensors, each measured at a respective position on a body of a subject, the plurality of sensors being located at the respective position on the body of the subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
47. The system of claim 46, wherein the processor is configured for determining two lowest impedance values amongst the impedance values and for identifying two given ones of the plurality of sensors associated with the two lowest impedance values.
48. The system of claim 47, wherein the processor is configured for operating a first one of the two given ones as a ground sensor to obtain the ground biosignal, and for operating a second one of the two given ones as a reference sensor to obtain the reference biosignal.
49. The system of any one of claims 46 to 48, wherein the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
50. A system for obtaining biosignals, the system comprising: a wearable sensing device mountable to a head of a user and comprising a plurality of sensors each to be located at a respective position on the head when the wearable sensing device is worn by the user; a processor; a non-transitory storage medium operatively connected to the processor, the non- transitory storage medium comprising computer-readable instructions; the processor, upon executing the instructions, being configured for: receiving impedance values of the plurality of sensors, each measured at a respective position on a body of a subject; determining a lowest impedance value amongst the impedance values and identifying a given one of the plurality of sensors associated with the lowest impedance value, the plurality of sensors comprising the given one of the plurality of sensors and remaining sensors; operating the given one of the plurality of sensors to obtain at least one of a ground biosignal and a reference biosignal; and operating at least one of the remaining sensors to obtain an active biosignal.
51. The system of claim 50, wherein the wearable sensing device comprises: at least one earcup for engaging an ear of a subject and comprising an inner edge, the inner edge facing the ear of the subject when the wearable sensing device is worn by the subject; a support body coupled to the at least one earcup for holding the at least one earcup over the ear of the subject when the wearable sensing device is worn by the subject; and wherein the plurality of sensors are located on the inner edge of the earcup.
52. The system of claim 51, wherein the at least one earcup comprises two earcups.
53. The system of claim 51 or 52, wherein when the wearable sensing device is worn by the subject, at least one sensor of the plurality of sensors is located on a mastoid bone of the subject.
54. The system of claim 51 or 52, wherein when the wearable sensing device is worn by the subject, at least one of the plurality of sensors is located in front of the ear, aligned with an eye of the subject.
55. The system of claim 51 or 52, wherein the support body engages a head portion of the subject when the wearable sensing device is worn by the subject and comprises an inner surface, at least one of the plurality of sensors being located on the inner surface of the support body.
56. The system of claim 50, wherein the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising an outer portion configured for facing and at least partially engaging the ear canal of the subject when the wearable sensing device is worn by the subject, wherein the plurality of sensors are located on the outer portion of the earbud.
57. The system of claim 50, wherein the wearable sensing device comprises: at least one earpiece comprising an earbud configured for insertion in an ear canal of a subject, the earbud comprising a top portion, a bottom portion opposite to the top portion and an outer wall extending between the top portion and the bottom portion, the outer wall configured for facing and at least partially engaging the ear canal of the subject when the wearable sensing device is worn by the subject, wherein the plurality of sensors are mounted on the outer wall of the earbud so as to engage with the ear canal when the earbud is worn by the subject.
58. The system of claim 57, wherein the plurality of sensors extend longitudinally on the outer wall between the top portion and the bottom portion.
59. The system of claim 58, wherein the plurality of sensors have a respective radial position about the outer wall.
60. The system of claim 57, wherein the plurality of sensors have a dot shape.
61. The system of claim 60, wherein the plurality of sensors have a respective radial position about the outer wall and a longitudinal position between the top portion and the bottom portion.
62. The system of claim 57, wherein the plurality of sensors extend along a circumference of the outer wall between the top portion and the bottom portion.
63. The system of claim 62, wherein the plurality of sensors have a respective longitudinal position between the top portion and the bottom portion.
64. The system of claim 62 or 63, wherein the plurality of sensors have a circular shape.
65. The system of claim 50, wherein the wearable sensing device comprises: a head mounted display for engaging a portion of a face of a subject and comprising an inner edge, the inner edge facing the portion of the face of the subject when the wearable sensing device is worn by the subject; a support body coupled to the head mounted display for engaging a head portion of the subject and for holding the head mounted display over the portion of the face of the subject when the wearable sensing device is worn by the subject, the support body comprising an inner surface; wherein the plurality of sensors are located on at least one of the inner edge of the head mounted display and the inner surface of the support body.
66. The system of any one of claims 50 to 65, wherein the processor is further configured for: controlling at least one generator for generating a respective alternating current at the respective position on the body of the subject; controlling at least one voltage sensor for measuring a respective voltage at the respective position on the body of the subject; and determining the impedance values based on the respective alternating current and the respective voltage.
67. The system of any one of claims 50 to 66, wherein each one of the plurality of sensors comprises one of an electroencephalography (EEG) sensor, an electrooculography (EOG) sensor, and an electromyography (EMG) sensor.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363486104P | 2023-02-21 | 2023-02-21 | |
| US63/486,104 | 2023-02-21 | 2023-02-21 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024176149A1 true WO2024176149A1 (en) | 2024-08-29 |
Family
ID=92500412
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/051684 (WO2024176149A1, ceased) | Method and system for determining facial movements | 2023-02-21 | 2024-02-21 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024176149A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170332964A1 (en) * | 2014-12-08 | 2017-11-23 | Mybrain Technologies | Headset for bio-signals acquisition |
| US20200249752A1 (en) * | 2013-06-20 | 2020-08-06 | Uday Parshionikar | Gesture based user interfaces, apparatuses and systems using eye tracking, head tracking, hand tracking, facial expressions and other user actions |
| US20200319710A1 (en) * | 2017-01-19 | 2020-10-08 | Mindmaze Holding Sa | Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system |
| US20220187912A1 (en) * | 2020-12-15 | 2022-06-16 | Neurable, Inc. | Monitoring of biometric data to determine mental states and input commands |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24759870; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |