
WO2023230589A1 - Headband with biosensor data monitoring - Google Patents

Headband with biosensor data monitoring

Info

Publication number
WO2023230589A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
sensor
data
headband
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2023/067519
Other languages
English (en)
Inventor
Tam Vu
Galen Pogoncheff
Tony VU MANH TUAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Earable Inc
University of Colorado System
University of Colorado Colorado Springs
Original Assignee
Earable Inc
University of Colorado System
University of Colorado Colorado Springs
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Earable Inc, University of Colorado System, University of Colorado Colorado Springs filed Critical Earable Inc
Publication of WO2023230589A1


Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/02: Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
                        • A61B 5/024: Measuring pulse rate or heart rate
                            • A61B 5/02416: Measuring pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
                    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
                        • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
                    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
                        • A61B 5/25: Bioelectric electrodes therefor
                            • A61B 5/251: Means for maintaining electrode contact with the body
                                • A61B 5/256: Wearable electrodes, e.g. having straps or bands
                            • A61B 5/279: Bioelectric electrodes specially adapted for particular uses
                                • A61B 5/28: Electrodes for electrocardiography [ECG]
                                    • A61B 5/282: Holders for multiple electrodes
                                • A61B 5/291: Electrodes for electroencephalography [EEG]
                                • A61B 5/296: Electrodes for electromyography [EMG]
                                • A61B 5/297: Electrodes for electrooculography [EOG] or electroretinography [ERG]
                    • A61B 5/48: Other medical applications
                        • A61B 5/4806: Sleep evaluation
                            • A61B 5/4812: Detecting sleep stages or cycles
                    • A61B 5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
                        • A61B 5/6801: Sensors specially adapted to be attached to or worn on the body surface
                            • A61B 5/6802: Sensor mounted on worn items
                                • A61B 5/6803: Head-worn items, e.g. helmets, masks, headphones or goggles
                • A61B 2562/00: Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
                    • A61B 2562/02: Details of sensors specially adapted for in-vivo measurements
                        • A61B 2562/0204: Acoustic sensors
                        • A61B 2562/0219: Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
    • G: PHYSICS
        • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities, or for the management or operation of medical equipment or devices
                    • G16H 40/60: ICT for the operation of medical equipment or devices
                        • G16H 40/63: ICT for the local operation of medical equipment or devices
                        • G16H 40/67: ICT for the remote operation of medical equipment or devices
                • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, or for detecting, monitoring or modelling epidemics or pandemics
                    • G16H 50/20: ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
                • H04R 5/00: Stereophonic arrangements
                    • H04R 5/033: Headphones for stereophonic communication
                        • H04R 5/0335: Earpiece support, e.g. headbands or neckrests
                • H04R 2460/00: Details of hearing devices covered by H04R 1/10 or H04R 5/033 but not provided for in any of their subgroups
                    • H04R 2460/13: Hearing devices using bone conduction transducers

Definitions

  • the present disclosure relates, in general, to a headband-based electronics device, and, more particularly, to a headband with biosensor data monitoring including, but not limited to, a headband-based biosensor system and a bone conduction speaker system.
  • Figs. 1A and 1B are schematic diagrams illustrating various non-limiting examples of a system for implementing a headband with biosensor data monitoring, in accordance with various embodiments.
  • Figs. 2A-2H are schematic diagrams illustrating various non-limiting examples of a headband with biosensor data monitoring, in accordance with various embodiments.
  • FIGs. 3A-3D are schematic diagrams illustrating a non-limiting example of the bone conduction speaker assembly (and its components) of Figs. 1 and 2, in accordance with various embodiments.
  • FIG. 4 is a diagram illustrating a non-limiting example of a head of a user with labels for potential locations on the head for alignment and positioning of bone conduction speakers and/or sensors, in accordance with various embodiments.
  • Figs. 5A and 5B are diagrams illustrating various non-limiting examples of decomposing mixed signal data into multiple distinct sensor signal data each corresponding to one of the two or more different types of electrophysiological (“EP") sensors, in accordance with various embodiments.
  • EP electrophysiological
  • FIGs. 6A and 6B are flow diagrams illustrating a method for implementing a headband with biosensor data monitoring, in accordance with various embodiments.
  • FIG. 7 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Various embodiments provide tools and techniques for implementing a headband-based electronics device, and, more particularly, for implementing a headband with bone conduction speakers including, but not limited to, a headband-based speaker system and a bone conduction speaker system.
  • the following detailed description illustrates a few embodiments in further detail to enable one of skill in the art to practice such embodiments.
  • the described examples are provided for illustrative purposes and are not intended to limit the scope of the invention.
  • some embodiments provide a headband with biosensor data monitoring.
  • the structure and configuration of the headband with biosensor data monitoring enable comfortable use, in which pressure contact with blood vessels on the head of the user is minimized or avoided even when the headband is pressed against the head of the user (such as when the user is lying down, sitting back, or otherwise lounging or resting with the side of the head on a pillow, cushion, or other surface).
  • the materials of the headband portion facilitate such pressure contact mitigation, even during activities that may otherwise cause shifting of biosensors in conventional head-based biosensor systems.
  • the use of one or more electrodes that each collect raw sensor signal data from each of two or more different types of electrophysiological (“EP”) sensors as mixed signal data, and the subsequent decomposition of the mixed signal data into its constituent signals, provide a more robust and more compact form factor for the headband-based biosensor system.
  • the headband-based biosensor system is enabled to monitor EP data from the user's head regardless of movement of the headband portion of the headband-based biosensor system, temporary loss of contact, and/or the like.
  • a single contact point (e.g., an electrode) that collects different signals for the different EP sensors
  • acquisition of data from specific regions of the brain and/or acquisition of data from specific facial muscle groups may be implemented in a manner that is robust, sustainable, accurate, and/or precise.
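The summary above does not specify how the mixed signal from a single contact point is decomposed into its constituent EP signals. As a minimal illustration only, the sketch below separates a synthetic mixed electrode signal into an "EEG-like" and an "EMG-like" component by frequency-band masking; a real system would more likely use adaptive filtering or blind source separation, and the function name, sampling rate, and band limits here are all illustrative assumptions, not taken from the patent.

```python
import numpy as np

def decompose_mixed_signal(mixed, fs, bands):
    """Split one electrode's mixed signal into per-band components via FFT masking.

    bands maps a label (e.g. "EEG", "EMG") to a (low_hz, high_hz) tuple.
    Returns a dict of time-domain components, one per band.
    """
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1.0 / fs)
    components = {}
    for label, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs <= hi)
        # Zero out everything outside the band, then invert back to the time domain.
        components[label] = np.fft.irfft(spectrum * mask, n=len(mixed))
    return components

# Synthetic mixed signal: a 10 Hz "EEG-like" alpha tone plus a 60 Hz "EMG-like" tone.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg_part = np.sin(2 * np.pi * 10 * t)
emg_part = 0.5 * np.sin(2 * np.pi * 60 * t)
mixed = eeg_part + emg_part

parts = decompose_mixed_signal(mixed, fs, {"EEG": (8, 13), "EMG": (30, 120)})
```

Band masking works here only because the synthetic components occupy disjoint frequency bands; overlapping real-world sources are why the patent also contemplates more powerful techniques such as independent component analysis.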
  • a headband in accordance with some embodiments can include bone conducting speakers and biosensor data monitoring.
  • biosensor data monitoring can be used to adjust sound played by the bone conducting speakers.
  • various operations of the methods described herein can be combined with operations of the methods described in the Incorporated Applications.
  • some embodiments can improve the functioning of user equipment or systems themselves (e.g., headband-based electronic device systems, headband-based biosensor systems, etc.), for example, by: receiving, using a computing system, first electrophysiological (“EP”) sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography (“EEG”) sensor, an electrooculography (“EOG”) sensor, an electromyography (“EMG”) sensor, or an electrocardiography (“ECG”) sensor; applying, using the computing system, signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and analyzing, using the computing system, at least one of the two or more decomposed, distinct sensor signal data, each individually, to perceive at least one biological and/or psychological state or condition of the user.
  • These functionalities can produce tangible results outside of the implementing computer system, including, merely by way of example, comfortable use of the headband-based biosensor system during day or night, and during active use or while resting (by using a smaller number of electrodes that each collect sensor data for different types of EP sensors, resulting in fewer, and better-placed, potential pressure points due to the electrodes themselves), while enabling acquisition of data from specific regions of the brain and/or from specific facial muscle groups in a manner that is robust, sustainable, accurate, and/or precise, at least some of which may be observed or measured by users, head-based biosensor device manufacturers, and/or manufacturers of head-based electronics devices that include biosensor data monitoring.
  • a method may comprise receiving, using a computing system, first electrophysiological (“EP”) sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography (“EEG”) sensor, an electrooculography (EOG) sensor, an electromyography (“EMG”) sensor, or an electrocardiography (“ECG”) sensor, and/or the like.
  • EEG electroencephalography
  • EOG electrooculography
  • EMG electromyography
  • ECG electrocardiography
  • the method may also comprise applying, using the computing system, signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and analyzing, using the computing system, at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected.
  • the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition, and/or the like.
  • the method perceives the at least one state or condition by determining that the sensor signals indicate a significant likelihood that the state or condition exists.
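The notion of "perceiving" a state as a significant likelihood that it exists can be sketched as a thresholded feature score. The toy example below uses the theta/beta EEG power ratio, a commonly used drowsiness proxy, as the evidence; the feature choice, the threshold value, and all names are illustrative assumptions and not the patent's disclosed method.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band (single-window FFT)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def perceive_drowsiness(eeg, fs, threshold=2.0):
    """Perceive a state only when its evidence crosses a likelihood-style threshold.

    Evidence here is the theta (4-8 Hz) to beta (13-30 Hz) power ratio.
    Returns (state_perceived, ratio).
    """
    ratio = band_power(eeg, fs, 4, 8) / band_power(eeg, fs, 13, 30)
    return bool(ratio > threshold), float(ratio)

# Synthetic examples: theta-dominated ("drowsy-like") vs beta-dominated ("alert-like") EEG.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
drowsy_like = np.sin(2 * np.pi * 6 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
alert_like = 0.1 * np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 20 * t)

is_drowsy, ratio = perceive_drowsiness(drowsy_like, fs)
```

The thresholded-ratio pattern is deliberately simple; it stands in for whatever statistical or learned model actually maps decomposed sensor signals to a likelihood of a state or condition.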
  • the method may further comprise analyzing, using the computing system, the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.
  • the computing system may comprise at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface (“UI”) device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.
  • UI dedicated user interface
  • the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like.
  • when the headband-based biosensor system is worn by the user, the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like.
  • the headband portion may be made of one or more materials comprising at least one of polyurethane, thermoplastic polyurethane ("TPU”), silicone, or polycarbonate (“PC”), and/or the like.
  • the method may further comprise receiving, using the computing system, first non-EP sensor data from each of one or more first non-EP sensors, the received first non-EP sensor data comprising the one or more non-EP sensor data; and analyzing, using the computing system, the received first non-EP sensor data individually, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the first non-EP sensor data.
  • the one or more first non-EP sensors may each comprise at least one of a photoplethysmography (“PPG”) sensor, an inertial measurement unit (“IMU”) sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like.
  • the one or more first non-EP sensors may comprise at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system.
  • perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: one or more detected events comprising at least one of one or more one-time events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or a combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like; or one or more identified patterns in the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like.
  • the biological data of each user may comprise the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like.
  • the method may further comprise: receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes; applying, using the computing system, signal processing to the received second EP sensor data to decompose the second mixed signal data from each second electrode into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and individually analyzing, using the computing system, the two or more decomposed, distinct sensor signal data corresponding to each second electrode, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the decomposed, distinct sensor signal data corresponding to the first electrode and to each of the one or more second electrodes.
  • high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user, may be achieved based at least in part on at least one of: one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; or hardware-based a…
  • At least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.
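The channel-quality evaluation attributed to the "first algorithms" above is not specified in this summary. One very simple possibility, sketched below under illustrative assumptions (threshold values, label strings, and the function name are all invented here), is to flag channels that have flatlined (electrode lost contact) or saturated (amplifier clipping):

```python
import numpy as np

def channel_quality(samples, flatline_std=1e-3, clip_value=1.0, clip_frac=0.05):
    """Crude per-channel quality check: reject flatlined or saturated channels.

    Returns "ok", "flatline", or "saturated". A channel is "flatline" when its
    standard deviation is negligible, and "saturated" when more than `clip_frac`
    of its samples sit at or beyond the clipping amplitude `clip_value`.
    """
    samples = np.asarray(samples, dtype=float)
    if samples.std() < flatline_std:
        return "flatline"
    if np.mean(np.abs(samples) >= clip_value) > clip_frac:
        return "saturated"
    return "ok"
```

A selection/referencing layer (the "second algorithms") could then keep only channels rated "ok" when combining electrodes, which is one way loose contact or pressure against a pillow might be tolerated without corrupting the decomposed signals.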
  • any sensor signal noise in at least one of the first EP sensor data or the one or more non-EP sensor data due to motion of the user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like.
  • the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like.
  • the motion of the user may comprise at least one of micro motions of the user or macro motions of the user, and/or the like.
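Of the motion-noise reduction techniques listed above, plain filtering is the simplest to illustrate. The sketch below suppresses a slow, motion-like baseline drift by subtracting a moving-average estimate of it, which acts as a crude high-pass filter; the window length, signal frequencies, and function name are illustrative assumptions, and a production system would more plausibly use the adaptive or ICA-based methods the text names.

```python
import numpy as np

def remove_motion_drift(signal, fs, cutoff_s=1.0):
    """Suppress slow baseline drift by subtracting a moving-average estimate.

    A moving average of width `cutoff_s` seconds tracks only low-frequency
    content; subtracting it from the signal removes the drift while largely
    preserving faster EP components.
    """
    width = max(1, int(cutoff_s * fs))
    kernel = np.ones(width) / width
    drift_estimate = np.convolve(signal, kernel, mode="same")
    return signal - drift_estimate

# A 10 Hz EEG-like component contaminated by a slow 0.3 Hz motion artifact.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)
drift = 0.8 * np.sin(2 * np.pi * 0.3 * t)
denoised = remove_motion_drift(clean + drift, fs)
```

Because the 1-second window averages the 10 Hz component to roughly zero while tracking the 0.3 Hz drift, the subtraction recovers the clean component up to edge effects; this separation-by-timescale is the same intuition that underlies the more sophisticated filtering options listed above.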
  • the headband portion may further comprise one or more straps that may be configured to tighten the headband portion around a head of the user in a closed band, wherein sensor signal variances of at least one of the first EP sensor data or the one or more non-EP sensor data due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user are compensated based at least in part on at least one of: one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; placement of the first electrode on the first portion of the headband portion, wherein the first portion may be determined to result in minimal sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user; or a form factor of the first electrode that is configured to provide contact with skin on the head of the user regardless of loose fit or tight fit of the headband portion around the head of the user.
  • the signal processing of the received first EP sensor data may comprise multimodal processing comprising at least one of real-time processing (e.g., within one millisecond), near-real-time processing, online processing, offline processing, on-microcontroller-unit ("on-MCU”) processing, on-user-device processing, or on-server processing, and/or the like.
  • the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis comprising at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-microcontroller-unit (“on-MCU”) analysis, on-user-device analysis, or on-server analysis, and/or the like.
  • the method may further comprise activating, using the computing system, at least one stimulation device disposed on one or more third portions of the headband portion, each stimulation device comprising one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like.
  • Each stimulation device may be configured to stimulate a physiological response in the user when activated.
  • activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user.
  • the method may also comprise receiving, using the computing system, updated first EP sensor data from the first electrode; applying, using the computing system, signal processing to the received updated first EP sensor data to decompose updated mixed signal data into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data each individually; analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other updated decomposed, distinct sensor signal data or one or more updated non-EP sensor data; determining, using the computing system, whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed; and sending, using the computing system, data regarding any changes to the perceived at least one biological and/or psychological state or condition of the user to the at least one user device; and/or the like.
  • a headwear-based biosensor system may comprise: a first portion; a first electrode disposed on the first portion; at least one processor; and a non-transitory computer readable medium communicatively coupled to the at least one processor.
  • the non-transitory computer readable medium may have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the at least one processor to: receive first electrophysiological (“EP”) sensor data from the first electrode, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography (“EEG”) sensor, an electrooculography (“EOG”) sensor, an electromyography (“EMG”) sensor, or an electrocardiography (“ECG”) sensor, and/or the like; apply signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; analyze at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headwear-based biosensor system when the first EP sensor data was collected; and analyze the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading.
  • the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like.
  • the article of headwear may comprise one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality (“VR”) headset, an augmented reality (“AR”) headset, a mixed reality (“MR”) headset, or a bandana, and/or the like.
  • the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface (“UI”) device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.
  • the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like.
  • a method may comprise receiving, using a computing system, first electrophysiological (“EP”) sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography (“EEG”) sensor, an electrooculography (“EOG”) sensor, an electromyography (“EMG”) sensor, or an electrocardiography (“ECG”) sensor, and/or the like.
  • the method may also comprise analyzing, using the computing system, the first mixed signal data to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected, wherein the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition.
  • the method may further comprise analyzing, using the computing system, the first mixed signal data in a correlated manner with one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.
  • the computing system may comprise at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface (“UI”) device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.
  • the method may further comprise receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes; and analyzing, using the computing system, the second mixed signal data from each second electrode, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on analysis of the first mixed signal data and based at least in part on analysis of the second mixed signal data from each second electrode.
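As a concrete illustration of the decomposition step described in the bullets above, a single mixed electrode signal can be split into per-modality estimates by frequency band. The band edges and the brick-wall FFT filtering below are illustrative assumptions only, not values taken from this disclosure; a practical implementation would more likely use designed IIR/FIR filters or blind source separation (e.g., independent component analysis):

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Crude brick-wall band-pass: zero FFT bins outside [low, high] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * mask, n=len(signal))

def decompose_mixed_ep(mixed, fs):
    """Split one mixed EP electrode signal into per-modality estimates.

    Band edges are illustrative assumptions: EOG ~0.1-10 Hz,
    EEG ~0.5-40 Hz, ECG ~0.5-45 Hz, EMG ~20-150 Hz.
    """
    return {
        "eog": bandpass_fft(mixed, fs, 0.1, 10.0),
        "eeg": bandpass_fft(mixed, fs, 0.5, 40.0),
        "ecg": bandpass_fft(mixed, fs, 0.5, 45.0),
        "emg": bandpass_fft(mixed, fs, 20.0, 150.0),
    }
```

For example, a mixture of a 10 Hz (EEG-band) and an 80 Hz (EMG-band) sinusoid is cleanly separated by the `eeg` and `emg` branches; overlapping bands (e.g., EEG vs. ECG) would require the correlated, multi-sensor analysis the claims describe.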
  • Figs. 1-7 illustrate some of the features of systems and apparatuses for implementing a headband-based electronic device and, more particularly, of systems and apparatuses for implementing a headband with biosensor data monitoring, including, but not limited to, a headband-based biosensor system and a bone conduction speaker system, as referred to above.
  • the methods, systems, and apparatuses illustrated by Figs. 1-7 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments.
  • the description of the illustrated methods, systems, and apparatuses shown in Figs. 1-7 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.
  • Figs. 1A and 1B are schematic diagrams illustrating various non-limiting examples 100 and 100' of a system for implementing a headband with biosensor data monitoring, in accordance with various embodiments.
  • system 100 may comprise a headband-based biosensor system 105, which may include a headband portion 110.
  • the headband portion 110 may be configured to wrap around at least a portion of a head 155a of a user 155 when the headband-based biosensor system 105 is worn by the user 155.
  • the headband portion 110 may include, without limitation, one or more bone conduction speaker assemblies 115a-115n (collectively, "bone conduction speaker assemblies 115" or the like), at least one processor 135, at least one wireless transceiver 140, one or more sensors 145, and one or more stimulation devices 150 (optional), and/or the like.
  • system 100 may further comprise one or more servers 135', one or more user devices 170a-170n (collectively, “user devices 170” or the like), user device(s) 175, and media content server(s) 180 and corresponding database(s) 185, or the like.
  • Headband-based biosensor system 105 may communicatively couple with each of the one or more user devices 170 via the at least one wireless transceiver 140 (as depicted in Fig. 1 by the lightning bolt symbol(s) between the headband-based biosensor system 105 and each user device 170).
  • user device(s) 170 may communicatively couple with the server 135', the user device(s) 175, and/or the media content server(s) 180 (and corresponding database(s) 185) via network(s) 190 (as depicted in Fig. 1 by the lightning bolt symbol(s) between network(s) 190 and each user device 170).
  • the at least one transceiver 140 is capable of communicating using protocols including, but not limited to, at least one of a Bluetooth™ communications protocol, a Wi-Fi communications protocol or other 802.11 suite of communications protocols, a ZigBee communications protocol, a Z-Wave communications protocol or other 802.15.4 suite of communications protocols, a cellular communications protocol (e.g., 3G, 4G, 4G LTE, 5G, etc.), or other suitable communications protocols, and/or the like.
  • network(s) 190 may each include, without limitation, one of a local area network (“LAN”), including, without limitation, a fiber network, an Ethernet network, a Token-Ring™ network, and/or the like; a wide-area network (“WAN”); a wireless wide area network (“WWAN”); a virtual network, such as a virtual private network (“VPN”); the Internet; an intranet; an extranet; a public switched telephone network (“PSTN”); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth™ protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks.
  • In one embodiment, the network(s) 190 may include an access network of the service provider (e.g., an Internet service provider (“ISP”)). In another embodiment, the network(s) 190 may include a core network of the service provider and/or the Internet.
  • the one or more bone conduction speaker assemblies 115 may be disposed on corresponding one or more first portions of an inner surface of the headband portion 110.
  • Each bone conduction speaker assembly 115 may include, but is not limited to, a bone conduction speaker device 120 and a deformable speaker housing 125a.
  • the bone conduction speaker device 120 may include, without limitation, a vibration plate 120a and a transducer 120b.
  • the vibration plate 120a may include, but is not limited to, a proximal portion facing the head of the user when the headband-based biosensor system 105 is worn by the user 155 and a distal portion.
  • the transducer 120b may include, without limitation, a proximal portion, one or more side portions, and a distal portion facing the inner surface of the headband portion 110, the proximal portion of the transducer 120b being mechanically coupled to the distal portion of the vibration plate 120a.
  • the deformable speaker housing 125a may enclose a portion of the transducer 120b with an air gap 125b between an interior surface of the deformable speaker housing 125a and each of the one or more side portions and the distal portion of the transducer 120b (referred to herein as "the hanging chamber” speaker design, or the like).
  • the deformable speaker housing 125a may include, but is not limited to, a deformable material configured to compress toward the transducer 120b within the air gap 125b when the headband portion 110 is pressed up against the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155, without the pressed-up headband portion 110 causing a shift in alignment of the corresponding vibration plate 120a relative to the head 155a of the user 155.
  • each bone conduction speaker assembly 115 may further include a padding material 130 disposed between the vibration plate 120a and the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155.
  • the headband portion 110 may be made of one or more materials including, without limitation, at least one of polyurethane, thermoplastic polyurethane ("TPU”), silicone, or polycarbonate (“PC”), and/or the like.
  • the padding material 130 may be made of one or more materials including, but not limited to, at least one of polyurethane, TPU, silicone, or PC, and/or the like.
  • the material of the padding material 130 may be designed to have a lower hardness rating (e.g., TPU with hardness of 35-40 Shore A, or the like, although not limited to such) compared with that of the material for the headband portion (e.g., TPU with hardness of 60 Shore A, or the like, although not limited to such).
  • the deformable material of the deformable speaker housing 125a may be made of one or more materials including, without limitation, at least one of polyamide (“PA”) or acrylonitrile butadiene styrene (“ABS”), and/or the like.
  • each bone conduction speaker device 120 may include, but is not limited to, one of a single-element speaker or a multi-element speaker.
  • the multi-element speaker may be configured for adjusting at least one of phase modulation, phase cancellation, or beating effects, and/or the like.
  • the one or more bone conduction speaker assemblies 115 may be part of a stereo speaker system including, but not limited to, one of a stereo 2.0 channel speaker system, a stereo 2.1 channel speaker system, a stereo 5.1 channel speaker system, or a stereo 7.1 channel speaker system, and/or the like, in which case the processor(s) 135 may utilize synchronization algorithms among the speaker components to enable optimal surround sound, or the like.
  • each transducer 120b may include a cross-sectional shape including, without limitation, one of an ellipse, a circle, a rectangle, or other polygon, and/or the like.
  • the headband portion 110 may further include one or more straps (shown in later figures).
  • each vibration plate 120a and corresponding one of the one or more first portions of the inner surface of the headband portion 110 may align with one of parietal bone, temporal bone, sphenoid bone, or frontal bone, and/or the like, of the head 155a of the user 155, while minimizing pressure contact with blood vessels and nerves on the head 155a of the user 155.
  • the headband portion 110 may be wrapped around a forehead of the user 155 and above both ears of the user 155.
  • the headband-based biosensor system 105 may further include a pair of ear coverings 160.
  • Each ear covering 160 may be attachable to the headband portion 110 and may be configured to cover an ear of the user 155 when the headband-based biosensor system 105 is worn by the user 155.
  • each ear covering 160 may be one of removably attachable to the headband portion 110, permanently attachable to the headband portion 110, or integrated with the headband portion 110, and/or the like.
  • each ear covering 160 may attach to the headband portion 110 via one of one or more sewed on threads, glue or other adhesive, one or more hook-and-loop fasteners (e.g., Velcro®, or the like), one or more button-based fasteners, one or more magnetic fasteners, one or more wire fasteners, one or more wire clasps, one or more hinges, one or more screws, one or more clips, or other fasteners, and/or the like.
  • the pair of ear coverings 160 may be made of one or more materials, including, but not limited to, at least one of cloth, foam, polyurethane, thermoplastic polyurethane ("TPU”), silicone, polycarbonate (“PC”), polyamide (“PA”), or acrylonitrile butadiene styrene (“ABS”), and/or the like.
  • each ear covering 160 may further include, without limitation, at least one of a sound isolation material, a passive noise cancellation device, or an active noise cancellation device, and/or the like (collectively, "sound isolation and/or cancellation material 160a” or “sound isolation and/or cancellation device 160a” or the like).
  • the headband-based biosensor system 105 may further include, but is not limited to, one or more acoustic speakers 160b, each disposed in one of a surface portion of the headband portion 110 or a portion of one of the pair of ear coverings 160.
  • the acoustic speakers 160b may be configured to be directional such that sound is focused toward the opening of the ear(s) of the user, without the acoustic speakers being placed in the ear(s), as in-ear placement results in undesired pressure contact within and on the ear of the user when pressed up against the user (such as when the user is sleeping or lying with their head against a pillow, cushion, or other surface), or the like.
  • directional sound may be achieved using micro-speaker arrays or the like that are configured to generate sound fields of desired directionality and shape, or the like.
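The micro-speaker-array approach to directional sound mentioned above is conventionally realized with delay-and-sum beamforming. The sketch below computes only the per-element steering delays for a uniform linear array; the element count, spacing, steering angle, and speed of sound are generic acoustics assumptions, not parameters from this disclosure:

```python
import math

def steering_delays(num_elems, spacing_m, angle_deg, c=343.0):
    """Per-element delays (seconds) that steer a uniform linear
    micro-speaker array toward angle_deg off broadside.

    Each element i is delayed by i * d * sin(theta) / c, then the
    delays are shifted so the smallest is zero (causal delays only).
    """
    sin_a = math.sin(math.radians(angle_deg))
    raw = [i * spacing_m * sin_a / c for i in range(num_elems)]
    ref = min(raw)  # shift so all delays are non-negative
    return [d - ref for d in raw]
```

Driving each element with a copy of the signal delayed by these amounts reinforces the wavefront toward the chosen angle (e.g., the ear opening) while partially canceling it elsewhere, which is one plausible way to realize the "desired directionality and shape" behavior described above.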
  • each acoustic speaker 160b may have a form factor that is configured to reduce contact pressure on the user 155 when pressed up against the head 155a of the user 155.
  • the one of the surface portion of the headband portion 110 or the portion of one of the pair of ear coverings 160 may be selected to minimize pressure contact between each speaker 120 and blood vessels and nerves on the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155 and when a portion of the headband portion 110 corresponding to the one of the surface portion of the headband portion 110 or the portion of one of the pair of ear coverings 160 is being pressed up against the head 155a of the user 155.
  • the headband-based biosensor system 105 may further include, but is not limited to, at least one eye covering 165 including a pair of eye coverings 165 each configured to cover an eye of the user 155 or a single eye covering 165 configured to cover both eyes of the user 155, each eye covering 165 being attachable to the headband portion 110.
  • the at least one eye covering 165 may be one of removably attachable to the headband portion 110, permanently attachable to the headband portion 110, or integrated with the headband portion 110, and/or the like.
  • the at least one eye covering 165 may attach to the headband portion 110 via one of one or more sewed on threads, glue or other adhesive, one or more hook-and-loop fasteners (e.g., Velcro®, or the like), one or more button-based fasteners, one or more magnetic fasteners, one or more wire fasteners, one or more wire clasps, one or more hinges, one or more screws, one or more clips, or other fasteners, and/or the like.
  • the at least one eye covering 165 may be made of one or more materials including, without limitation, at least one of cloth, foam, polyurethane, thermoplastic polyurethane (“TPU”), silicone, polycarbonate (“PC”), polyamide (“PA”), or acrylonitrile butadiene styrene (“ABS”), and/or the like.
  • the at least one eye covering 165 may further include a display device(s) 165a, including, but not limited to, at least one of a head-mounted flexible display device, a head-mounted micro-projector display device, a head-mounted flexible semi-transparent display device, a virtual reality (“VR”) display device, an augmented reality (“AR”) display device, or a mixed reality (“MR”) display device, and/or the like.
  • the headband-based biosensor system 105 may include, without limitation, one or more sensors 145 disposed within the headband portion 110.
  • the one or more sensors 145 may include one or more electrophysiological (“EP”) sensors 145a, which may each include, without limitation, at least one of an electroencephalography (“EEG”) sensor 145b, an electrooculography (“EOG”) sensor 145c, an electromyography (“EMG”) sensor 145d, or an electrocardiography (“ECG”) sensor 145e, and/or the like.
  • the one or more sensors 145 may each further include at least one of a photoplethysmography (“PPG”) sensor 145g, an inertial measurement unit (“IMU”) sensor 145h, or one or more other sensors 145i, and/or the like (collectively, “non-EP sensors” or the like).
  • the one or more other sensors 145i may each include, but is not limited to, at least one of an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like.
  • the headband-based biosensor system 105 may further include, but is not limited to, at least one stimulation device 150 disposed on one or more second portions of the inner surface of the headband portion 110, each stimulation device 150 may include, without limitation, one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like, and each stimulation device 150 may be configured to stimulate a physiological response in the user 155 when activated.
  • the headband-based biosensor system 105 may further include, but is not limited to, the at least one wireless transceiver 140 disposed within the headband portion 110; and the at least one processor 135 disposed within the headband portion 110.
  • the at least one processor 135 may be configured to: receive, via the at least one wireless transceiver 140, wireless audio signals from a user device (e.g., user device(s) 170a-170n and/or 175 or media content server(s) via user device(s) 170a-170n and/or 175, or the like) that is external to, and separate from, the headband-based biosensor system 105; and control playback of audio content through each bone conduction speaker 120 that is housed within each of the one or more bone conduction speaker assemblies 115, based on the received wireless audio signals.
  • a bone conduction speaker system may comprise one or more bone conduction speaker assemblies (similar to bone conduction speaker assemblies 115a-115n, or the like) disposed on corresponding one or more first portions of an inner surface of an article of headwear (not shown in Fig. 1).
  • Each bone conduction speaker assembly may include, without limitation, a bone conduction speaker device (similar to bone conduction speaker device 120, or the like) and a deformable speaker housing (similar to deformable speaker housing 125a, or the like).
  • the bone conduction speaker device may include, but is not limited to, a vibration plate (similar to vibration plate 120a, or the like) and a transducer (e.g., transducer 120b, or the like).
  • the vibration plate may include, but is not limited to, a proximal portion facing the head (similar to head 155a, or the like) of the user (similar to user 155, or the like) when the article of headwear is worn by the user and a distal portion.
  • the transducer may include, without limitation, a proximal portion, one or more side portions, and a distal portion facing the inner surface of the article of headwear. The proximal portion of the transducer may be mechanically coupled to the distal portion of the vibration plate.
  • the deformable speaker housing may enclose a portion of the transducer with an air gap (similar to air gap 125b, or the like) between an interior surface of the deformable speaker housing and each of the one or more side portions and the distal portion of the transducer.
  • the deformable speaker housing may include, but is not limited to, a deformable material configured to compress toward the transducer within the air gap when the article of headwear is pressed up against the head of the user when the article of headwear is worn by the user without the pressed-up article of headwear causing a shift in alignment of the corresponding vibration plate relative to the head of the user.
  • the article of headwear may include, without limitation, one of a headband-based biosensor system, a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality (“VR") headset, an augmented reality (“AR”) headset, a mixed reality (“MR”) headset, or a bandana, and/or the like.
  • each bone conduction speaker assembly may be one of affixed to, removably attachable to, or integrated with the inner surface of the article of headwear.
  • each transducer may include, but is not limited to, a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon, and/or the like.
  • each bone conduction speaker assembly may further include a padding material (similar to padding material 130, or the like) disposed between the vibration plate and the head of the user when the article of headwear is worn by the user.
  • the padding material may be made of one or more materials including, but not limited to, at least one of polyurethane, thermoplastic polyurethane ("TPU”), silicone, or polycarbonate (“PC”), and/or the like.
  • the deformable material of the deformable speaker housing may be made of one or more materials including, without limitation, at least one of polyamide (“PA”) or acrylonitrile butadiene styrene (“ABS”), and/or the like.
  • the bone conduction speaker system may further include, but is not limited to, at least one wireless transceiver (similar to wireless transceiver(s) 140, or the like); and at least one processor (similar to processor(s) 135, or the like) communicatively coupled with the at least one wireless transceiver and each of the one or more bone conduction speaker assemblies.
  • the at least one processor may be configured to: receive, via the at least one wireless transceiver, wireless audio signals from a user device (e.g., user device(s) 170a-170n and/or 175, or the like) that is external to, and separate from, the bone conduction speaker system; and control playback of audio content through each bone conduction speaker that is housed within each of the one or more bone conduction speaker assemblies, based on the received wireless audio signals.
  • the at least one processor 135, the server 135', or other computing systems may receive first EP sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode including first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors comprising the EP sensors 145a, or the like.
  • the other computing systems may include, but are not limited to, at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of one or more user devices among the at least one user device, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the computing system may apply signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; may analyze at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected; and may analyze the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading.
  • biological state or condition may include, without limitation, at least one of physiological, neurological, or cognitive state or condition, and/or the like.
  • the computing system may send data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device (e.g., user device(s) 170a- 170n and/or 175, via transceiver(s) 140 and/or network(s) 190, or the like).
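A toy illustration of the correlated false-reading check described above: cross-referencing an EEG-derived perception against concurrent non-EP (here, IMU motion) data before reporting it. The drowsiness/motion pairing, the thresholds, and the return labels are invented for illustration; the disclosure does not specify this particular rule:

```python
def validate_perception(eeg_drowsiness_score, imu_motion_rms,
                        drowsy_thresh=0.7, motion_thresh=0.5):
    """Cross-check an EEG-derived state against IMU motion data.

    Heavy head motion concurrent with an apparent drowsiness reading
    suggests a motion artifact, so the perception is flagged as a
    possible false reading rather than an actual state. All values
    and thresholds here are hypothetical.
    """
    perceived_drowsy = eeg_drowsiness_score >= drowsy_thresh
    if not perceived_drowsy:
        return "no_state_perceived"
    if imu_motion_rms >= motion_thresh:
        return "false_reading_suspected"
    return "actual_state_confirmed"
```

Only the "actual_state_confirmed" branch would trigger the step of sending data about the perceived state to the user device(s).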
  • the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface (“UI”) device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.
  • the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like.
  • the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like.
  • the computing system may receive first non-EP sensor data from each of one or more first non-EP sensors (e.g., PPG sensor 145g, IMU sensor 145h, and/or other sensor(s) 145i, or the like), the received first non-EP sensor data comprising the one or more non-EP sensor data; may analyze the received first non-EP sensor data individually, where perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the first non-EP sensor data.
  • the one or more first non-EP sensors may include, without limitation, at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system 105 or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system 105.
  • perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: one or more detected events comprising at least one of one or more one-time events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or a combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like; or one or more identified patterns in the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data; and/or the like.
  • the biological data of each user may include, but is not limited to, the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like.
  • the computing system may receive second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes.
  • the computing system may apply signal processing to the received second EP sensor data to decompose the second mixed signal data from each second electrode into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and may individually analyze the two or more decomposed, distinct sensor signal data corresponding to each second electrode, where perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the two or more decomposed, distinct sensor signal data corresponding to each second electrode.
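Merely by way of illustration, one generic way to decompose a mixed EP signal into modality-specific components is frequency-band separation. The sketch below is not the claimed implementation: the band edges, sampling rate, and function names are assumptions, and synthetic tones stand in for EOG-, EEG-, and EMG-like activity.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250.0  # assumed sampling rate (Hz)

# Assumed nominal bands; real EP modalities overlap in frequency, so this is
# only a first-order approximation of "decomposition."
BANDS = {"EOG": (0.1, 4.0), "EEG": (4.0, 30.0), "EMG": (30.0, 100.0)}

def decompose_mixed_signal(mixed, fs=FS):
    """Split one mixed-electrode signal into per-modality band components."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, mixed)  # zero-phase band-pass
    return out

# Synthetic mixed signal: 1 Hz "EOG", 10 Hz "EEG", and 60 Hz "EMG" tones.
t = np.arange(0, 10, 1 / FS)
mixed = (50 * np.sin(2 * np.pi * 1 * t)
         + 20 * np.sin(2 * np.pi * 10 * t)
         + 5 * np.sin(2 * np.pi * 60 * t))
components = decompose_mixed_signal(mixed)
```

In practice, simple band-pass separation would be complemented by the correlated analysis and artifact handling described elsewhere in this document, since EOG, EEG, and EMG spectra overlap.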
  • “individually analyzing two or more signals or data” refers to analyzing each of the two or more signals or data in an individual manner.
  • high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user may be achieved based at least in part on at least one of: one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or the one or more second channels corresponding to the one or more second electrodes; one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; or hardware-based amplification circuits configured to improve at least one of signal quality or signal noise suppression; and/or the like.
  • At least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.
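As a hedged, non-limiting sketch of what a "first algorithm" for channel-quality evaluation might look like, the heuristic below scores a channel using flatline, saturation, and variance checks. All function names and thresholds are illustrative assumptions, not the patented models.

```python
import numpy as np

def channel_quality(x, flat_eps=1e-6, amp_limit=500.0, var_limit=1e4):
    """Score one channel (assumed microvolt units) in [0, 1]."""
    x = np.asarray(x, dtype=float)
    if np.ptp(x) < flat_eps:           # flatlined: electrode lost skin contact
        return 0.0
    score = 1.0
    if np.max(np.abs(x)) > amp_limit:  # railed / saturated front-end
        score -= 0.5
    if np.var(x) > var_limit:          # implausibly high variance (noise)
        score -= 0.5
    return max(score, 0.0)

def usable_channels(channels, threshold=0.5):
    """Indices of channels meeting the (assumed) quality threshold."""
    return [i for i, ch in enumerate(channels) if channel_quality(ch) >= threshold]
```

Downstream, channels scoring below the threshold could be excluded from referencing or analysis, in the spirit of the "selecting, referencing, or rejecting" algorithms described above.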
  • any sensor signal noise in at least one of the first EP sensor data or the one or more non-EP sensor data due to motion of the user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like.
  • the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like.
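The adaptive-filtering option mentioned above can be illustrated with a textbook least-mean-squares (LMS) canceller that uses a motion-correlated reference signal (e.g., derived from an IMU) to subtract motion noise from a contaminated channel. This is a generic sketch under assumed parameters, not the product's algorithm.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Subtract the reference-correlated component from the primary signal."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(len(primary)):
        # Most recent reference samples, newest first, zero-padded at start.
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                 # estimate of motion-correlated noise
        e = primary[n] - y        # error = cleaned sample
        cleaned[n] = e
        w += 2 * mu * e * x       # LMS weight update
    return cleaned

# Synthetic demo: a 10 Hz "EEG" tone contaminated by a 2 Hz motion signal
# that is also observed (here, exactly) by the reference channel.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)
motion = np.sin(2 * np.pi * 2 * t)
contaminated = eeg + motion
cleaned = lms_cancel(contaminated, motion)
```

After the filter converges, the residual motion component in the second half of the record is substantially smaller than in the contaminated input.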
  • the motion of the user may include, but is not limited to, at least one of micro motions of the user (including, without limitation, motion due to breathing, motion due to snoring, eye movement (when the user is awake or during REM sleep, etc.), motion due to heart beating, etc.) or macro motions of the user (including, without limitation, walking, running, or performing other physical activities, movement during sleep, etc.) and/or the like.
  • the headband portion may further comprise one or more straps (as shown in Figs. 2A and 2C, or the like, although not limited to the type of straps as shown) that may be configured to tighten the headband portion around a head of the user in a closed band.
  • sensor signal variances of at least one of the first EP sensor data or the one or more non-EP sensor data due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user may be compensated based at least in part on at least one of: one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; placement of the first electrode on the first portion of the headband portion, where the first portion may be determined to result in minimal sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user; or formfactor of the first electrode that is configured to provide contact with skin on the head of the user regardless of loose fit or tight fit of the headband portion around the head of the user; and/or the like.
  • the signal processing of the received first EP sensor data may comprise multimodal processing comprising at least one of real-time processing, near-real-time processing, online processing, offline processing, on-microcontroller-unit ("on-MCU") processing, on-user-device processing, or on-server processing, and/or the like.
  • the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis comprising at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-microcontroller-unit ("on-MCU") analysis, on-user-device analysis, or on-server analysis, and/or the like.
  • the computing system may activate at least one stimulation device (e.g., stimulation device(s) 150, or the like) that is disposed on one or more third portions of the headband portion.
  • activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user.
  • the computing system may receive updated first EP sensor data from the first electrode; may apply signal processing to the received updated first EP sensor data to decompose updated mixed signal data into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; may analyze at least one of the updated two or more decomposed, distinct sensor signal data each individually; may analyze at least one of the updated two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other updated decomposed, distinct sensor signal data or one or more updated non-EP sensor data; may determine whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed; and may send data regarding any changes to the perceived at least one biological and/or psychological state or condition of the user to the at least one user device; and/or the like.
  • analysis may be performed on the mixed signal data corresponding to each electrode.
  • algorithms, machine learning approaches, and/or learning models may be used to facilitate as well as enhance results of analysis based on the mixed signal data.
  • perception of at least one biological and/or psychological state or condition of the user may be performed based on correlated analysis of the EP sensor data (regardless of whether decomposed or mixed signal data is used) and the non-EP sensor data, rather than based on individual analysis of each type of data.
  • although Fig. 1 is directed to a headband-based biosensor system, the various embodiments are not so limited, and a headwear-based biosensor system having similar functionality and at least some of the components of the above-described headband-based biosensor system may be used in a similar manner as described with respect to the headband-based biosensor system.
  • the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like.
  • the article of headwear may include, but is not limited to, one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality (“VR”) headset, an augmented reality (“AR”) headset, a mixed reality (“MR”) headset, or a bandana, and/or the like.
  • sensor data from at least one of EP sensor(s) 145a, PPG sensor(s) 145g, IMU sensor(s) 145h, microphone(s) 145j, and/or other sensor(s) 145i, or the like may be processed and/or analyzed by processor 135", in some cases, using digital signal processor 135a", and, in some instances, using inference algorithms 135b" (similar to first through fifth algorithms as described above), which may be part of the code that is stored in memory 135c", or the like.
  • the sensor data may be processed through one or more hardware-based signal quality improvement mechanisms 195, including, but not limited to, hardware-based amplification circuits (including, but not limited to, operational amplifier (“op-amp”) circuits, or the like) to amplify signal data in the sensor data to improve at least one of signal quality or signal noise suppression, and/or the like.
  • processor 135" may be at least one of local processing 135 (with DSP 135a and inference algorithms 135b, or the like) on a local device (e.g., either headband-based biosensor system 105 and/or user device 170 among user devices 170a-170n of Fig. 1A, or the like).
  • because the components, features, and functionalities of example 100' of Fig. 1B are otherwise similar, if not identical, to the corresponding components, features, and functionalities of example 100 of Fig. 1A, the descriptions of such corresponding components, features, and functionalities of example 100 of Fig. 1A are applicable to the components, features, and functionalities of example 100' of Fig. 1B.
  • FIGs. 2A-2H are schematic diagrams illustrating various non-limiting examples 200 and 200' of a headband with biosensor data monitoring, in accordance with various embodiments.
  • a headband-based biosensor system 205 (similar to headband-based biosensor system 105 of Fig. 1, or the like) may include a headband portion 210 (similar to headband portion 110 of Fig. 1, or the like).
  • Headband portion 210 may include, without limitation, a front cover 210a (also referred to as "outer surface" or the like), a back cover 210b (also referred to as "inner surface" or the like), a backstrap 210c (also referred to as "strap(s)" or the like), a power button 210d, one or more volume buttons 210e, or a cable port 210f (which may include, but is not limited to, a USB port or other data and/or power supply port, or the like), and/or the like. Headband-based biosensor system 205 may further include, without limitation, one or more bone conduction speakers or speaker assemblies 215 (similar to bone conduction speaker assemblies 115a-115n of Fig. 1, or the like).
  • headband-based biosensor system 205 may further include one or more sensors 245, including, but not limited to, one or more spider electrodes 245a and one or more behind-the-ear electrodes 245b, or the like.
  • the one or more spider electrodes 245a, which may be made of a flexible conductive material (including, without limitation, conductive silicone, or the like), may be configured to extend through any head hair of a user wearing the headband-based biosensor system 205 to contact skin on the head of the user, and may be configured to provide optimal conductivity (through said head hair and with the contacted skin) while providing comfort and stability.
  • the one or more spider electrodes 245a may further be configured to achieve quick stabilization times (e.g., on the order of less than 1 minute stabilization) of signal detection.
  • the one or more behind-the-ear electrodes 245b may be configured to make contact with the skin on the head of the user that is, as the name suggests, behind the ear(s) of the user.
  • each of the one or more spider electrodes 245a and the one or more behind-the-ear electrodes 245b may be further configured to monitor, track, and/or collect biosensor data of the user, the biosensor data including, but not limited to, at least one of electroencephalography (“EEG”) sensor data, electrooculography (EOG) sensor data, electromyography (“EMG”) sensor data, electrocardiography (“ECG”) sensor data, or photoplethysmography (“PPG”) sensor data, and/or the like.
  • the one or more sensors 245 may further include, without limitation, at least one of one or more forehead sensors disposed on the back cover 210b (and positioned to align with the forehead of the user when the headband-based biosensor system is worn by the user), one or more motion- and/or orientation-based sensors disposed within the headband portion 210, or one or more other sensors disposed within and/or on a surface of the headband portion 210, and/or the like.
  • the one or more forehead sensors (similar to sensors or electrodes 245c to 245g of Figs. 2D-2H, or the like) may also be configured to monitor, track, and/or collect biosensor data of the user (including, but not limited to, at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like).
  • electrode may refer to an electrically conductive contact surface that collects electrophysiological signals from the user (in this case, from the head of the user), while “sensor” may refer to one of the sensors 145a-145j in Fig. 1, or the like. In the case that only one sensor is communicatively coupled to one electrode, “electrode” and “sensor” are synonymous and interchangeable.
  • each “sensor” should be individually referenced, while the “electrode” refers to an electrically conductive contact surface that collects and mixes (or superimposes) the individual sensor signals from each of the communicatively coupled "sensors.”
  • the raw data collected by each of these sensors 245 may contain all these biosensor data mixed on top of each other.
  • a processor(s) (similar to processor(s) 135 of Fig. 1, or the like) of the headband-based biosensor system 205 and/or the processor(s) of an external device (e.g., user device(s) 170a-170n and/or 175 of Fig. 1, or the like) may apply algorithms for filtering, for data cleaning, and/or for otherwise signal processing the collected raw data (which conventional devices would treat as noise signals) to extract (and in some cases, amplify) individual signals each corresponding to the at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like.
  • the one or more motion- and/or orientation-based sensors may include, but are not limited to, at least one of one or more inertial measurement unit ("IMU") sensors, one or more accelerometers, or one or more gyroscopes, and/or the like.
  • the one or more motion- and/or orientation-based sensors may be configured to track motion and/or relative motion as well as orientation of the headband-based biosensor system, and thus the head of the user.
  • Correlation of data associated with motion and/or orientation of the head of the user (via motion and/or orientation of the headband-based biosensor system) with the biosensor data of the user (collected via the biosensors) [collectively, "correlated data" or the like] may be useful in monitoring the health and physiological status of the user.
  • in the case that audio signals that are output through the bone conduction speakers and/or the acoustic speakers are intended for therapy purposes for the user, such audio signals may be modulated and/or adjusted in response to at least one of the biosensor data, the data associated with motion and/or orientation of the head of the user, and/or the correlated data.
  • data associated with motion and/or orientation of the headband-based biosensor system may be used to suppress noise in the biosensor data and/or noise effects in the audio signals (regardless of whether the audio signals are therapybased audio signals, music, voice signals (such as voice signals for a telephone call, voice over Internet protocol (“VoIP”) call, a voice chat via software application (“app”) chat, or an audio book, and/or the like), or other audio signals (e.g., sounds of nature, white-noise sounds, etc.), and/or the like).
  • the one or more other sensors may include, without limitation, at least one of a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like.
  • the sound sensor and/or the microphone may be configured to monitor breathing sounds of the user, other head-based sounds of the user, and/or ambient sounds. In some instances, the breathing sounds, the other head-based sounds, and/or the ambient sounds may be used as inputs for health monitoring and/or therapy purposes for the user.
  • the breathing sounds, the other head-based sounds, and/or the ambient sounds may be used to suppress noise effects in the audio signals (regardless of whether the audio signals are therapy-based audio signals, music, voice signals (such as voice signals for a telephone call, voice over Internet protocol ("VoIP") call, a voice chat via software application (“app") chat, or an audio book, and/or the like), or other audio signals (e.g., sounds of nature, white-noise sounds, etc.), and/or the like).
  • the microphone may also be used as an input device for voice control and communication.
  • the temperature sensor may be configured to monitor either temperature of the user or ambient temperature, or both, and resultant temperature data may be used for health monitoring and/or therapy purposes for the user.
  • the moisture sensor may be configured to monitor ambient moisture levels around the user (e.g., around the headband-based speaker system, etc.) and/or sweat levels of the user, while the sweat sensor may be configured to monitor sweat levels of the user and/or salinity of the sweat of the user.
  • the oximeter may be configured to monitor blood oxygen levels of the user, while the heart rate sensor may be configured to measure the heart rate or pulse of the user, and a pulse oximeter may be configured to perform both functions.
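By way of a hedged example of heart-rate sensing, a pulse rate can be estimated from a PPG-like waveform by detecting beat peaks and averaging inter-beat intervals. The sampling rate, peak-spacing constraint, and function name below are illustrative assumptions, not the specification's method.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg, fs=100.0):
    """Estimate heart rate from PPG peaks (mean inter-beat interval)."""
    # Require peaks at least 0.4 s apart (i.e., below 150 bpm).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return None
    ibi = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / np.mean(ibi)

# Synthetic PPG-like waveform at 72 bpm (1.2 Hz).
fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
```

A real PPG pipeline would additionally handle baseline wander and motion artifacts, e.g., using the IMU-referenced noise suppression described elsewhere in this document.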
  • the blood pressure sensor may be configured to measure blood pressure of the user
  • the light sensor may be configured to monitor ambient light conditions, and resultant light sensor data may be used for health monitoring and/or therapy purposes for the user, and/or (in the case that eye covering(s) with display device are used, such as described above with respect to Fig. 1, or the like) may be used to adjust brightness, contrast, or other display characteristics of the display device of the eye covering(s) to counter effects of the monitored ambient light conditions, or the like.
  • the light sensor may also be used to control and modulate light stimulation implemented by the eye covering(s) with display device (such as described above with respect to Fig. 1, or the like), or the like.
  • the power button 210d may be configured to initiate one or more functions. For example, holding the power button 210d for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 to switch between a powered on state or a powered off state.
  • pressing and releasing (e.g., clicking) the power button 210d may cause the headband-based biosensor system 205 to change (or cycle through) a plurality of modes (including, but not limited to, one or more audio playback modes, one or more voice call connection modes, one or more stimulation modes, or the like).
  • pressing and releasing (e.g., clicking) both the power button 210d and one of the volume buttons 210e may cause the headband-based biosensor system 205 to change (or cycle through) the plurality of modes in one direction, while pressing and releasing (e.g., clicking) both the power button 210d and the other of the volume buttons 210e may cause the headband-based biosensor system 205 to change (or cycle through) the plurality of modes in the other direction.
  • pressing and releasing (e.g., clicking) one of the volume buttons 210e may cause the headband-based biosensor system 205 to increase perceived volume of the bone conduction speakers 215, while pressing and releasing (e.g., clicking) the other of the volume buttons 210e may cause the headband-based biosensor system 205 to decrease perceived volume of the bone conduction speakers 215.
  • holding one of the volume buttons 210e for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 either to forward through an audio track or to skip the audio track, while holding the other of the volume buttons 210e for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 either to rewind through an audio track or to skip back to a previous audio track.
  • the various embodiments are not limited to these particular functions and/or actuation mechanisms of the power button 210d and/or the volume buttons 210e, and may include any suitable set, combination, or other actuation sequences involving the power button 210d and/or the volume buttons 210e for actuating or initiating functions of the headband-based biosensor system 205.
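For illustration only, the button behaviors described above can be modeled as a small dispatcher. The hold threshold (about 1 second) and the notion of cycling through playback, voice-call, and stimulation modes follow the text, while the class and method names are hypothetical.

```python
class HeadbandControls:
    """Toy model of the power/volume button behaviors described above."""
    MODES = ["audio_playback", "voice_call", "stimulation"]

    def __init__(self):
        self.powered = False
        self.mode_index = 0
        self.volume = 5

    def power_hold(self, seconds):
        """Holding power longer than ~1 s toggles the power state."""
        if seconds >= 1.0:
            self.powered = not self.powered

    def power_click(self):
        """Clicking power cycles through the operating modes."""
        self.mode_index = (self.mode_index + 1) % len(self.MODES)
        return self.MODES[self.mode_index]

    def volume_click(self, up=True):
        """Clicking a volume button nudges perceived speaker volume."""
        self.volume = max(0, min(10, self.volume + (1 if up else -1)))
        return self.volume
```

As the text notes, any suitable combination or actuation sequence could be mapped to functions in the same way; this sketch just makes the press/hold distinction concrete.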
  • a headband-based biosensor system 205' (similar to headband-based biosensor system 105 or 205 of Fig. 1 or Figs. 2A-2C, or the like) may include a headband portion 210 (similar to headband portion 110 or 210 of Fig. 1 or Figs. 2A-2C, or the like).
  • Headband portion 210 may include, without limitation, a front cover 210a (also referred to as "outer surface" or the like), a back cover 210b (also referred to as "inner surface" or the like), or a cable port 210f (which may include, but is not limited to, a USB port or other data and/or power supply port, or the like), and/or the like. Headband-based biosensor system 205' may further include, without limitation, one or more bone conduction speakers or speaker assemblies 215 (similar to bone conduction speaker assemblies 115a-115n or 215 of Fig. 1 or Figs. 2A-2C, or the like).
  • headband-based biosensor system 205' may further include one or more sensors 245, including, but not limited to, one or more spider electrodes 245a, one or more behind-the-ear electrodes 245b, a left forehead sensor 245c, a ground electrode 245d, a reference electrode 245e, a photoplethysmography (“PPG") sensor 245f, and/or a right forehead sensor 245g, and/or the like.
  • each of the sensors 245c, 245d, 245e, and 245g may be configured to make contact with the skin on the forehead of the user.
  • each of the one or more spider electrodes 245a, the one or more behind-the-ear electrodes 245b, the left forehead sensor 245c, and the right forehead sensor 245g may be configured to monitor, track, and/or collect biosensor data of the user (including, but not limited to, at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like), as described in detail above with respect to Figs. 2A-2C.
  • the ground electrode 245d may be used as a stabilization source during signal acquisition, or the like, while the reference electrode 245e may be used (in some cases, as the default case) as the reference source to compare with electrical activity as detected by the behind-the-ear sensor(s) 245b, the left forehead sensor 245c, and/or the right forehead sensor 245g, and/or the like. Headband-based biosensor system 205' may further include a microphone 210g.
  • the headband portion may further comprise a flexible printed circuit board ("PCB") substrate(s) disposed between the front cover 210a and the back cover 210b.
  • in some embodiments, control circuitry, including the processor(s) (similar to processor(s) 135 of Fig. 1A, or the like), the transceiver(s) (similar to transceiver(s) 140 of Fig. 1A, or the like), the memory devices (similar to memory 135c" of Fig. 1B, or the like), and signal quality improvement systems (similar to HW-based signal quality improvement mechanism(s) 195 of Fig. 1B, or the like), as well as sensors (similar to sensors 145 and 145a-145j of Fig. 1, or the like), may be disposed on corresponding portions of the flexible PCB substrate(s), which allows flexibility and compression of the headband portion of the headband-based biosensor system without damaging the control circuitry disposed on the flexible PCB substrate(s).
  • one or more electronically actuated switches may be disposed on the flexible PCB substrate(s) to selectively turn off particular sensors and/or to selectively block signal lines leading to the particular sensors or to selectively cause signal lines to be open-circuited between the electrodes (i.e., contact points or channels, etc.) and the particular sensors (effectively providing for "selective deactivation of sensors” or the like).
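A hedged software analogue of this "selective deactivation of sensors" idea is an enable bitmask, where each bit stands in for an electronically actuated switch on a sensor's signal line. The class and sensor names are illustrative, not taken from the specification.

```python
class SensorSwitchBank:
    """Toy bitmask model of per-sensor signal-line switches."""
    SENSORS = ("EEG", "EOG", "EMG", "PPG", "IMU")

    def __init__(self):
        self.mask = (1 << len(self.SENSORS)) - 1   # all sensors enabled

    def set_enabled(self, name, enabled):
        """Close (enable) or open-circuit (disable) one sensor's line."""
        bit = 1 << self.SENSORS.index(name)
        self.mask = self.mask | bit if enabled else self.mask & ~bit

    def is_enabled(self, name):
        return bool(self.mask & (1 << self.SENSORS.index(name)))
```

In firmware, the mask would drive the actual switch hardware; here it simply records which sensor signal lines are connected.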
  • Many signals - including, but not limited to, brain (or brain wave) signals, eye motion signals, facial muscle signals, head motion signals, head posture signals, and other head-related signals, or the like - are not possible to measure without making direct contact with the head.
  • positions of electrodes are carefully selected so that they can optimally (and comfortably, for the user) capture the brain, eye, and facial muscle signals (e.g., EMG sensor signals, or the like). In some cases, electrode locations may be driven by medical advisory and data-driven studies.
  • forehead electrode location(s) enables EEG sensing from the frontal lobe of the brain. Additionally, offset from the center of the forehead enables acquisition of eye movement activity in comparable quality to dedicated EOG electrodes used in clinical studies.
  • Common mode sensing ("CMS") electrode(s) may be placed at the center of the forehead because it provides a stable contact point for the reference electrode when the user is active, lying down, or working, and/or the like. Additionally, hair rarely covers this region, so its contact is infrequently disturbed. Electrodes on the sides of the head enable EEG sensing directly over the temporal lobe. Additionally, their proximity to the mastoid bone enables them to act as an additional source for signal referencing.
  • EEG, EMG, and EOG signals have been compared against those from corresponding FDA-approved clinical equipment, which act as sources of ground truth for improvement and optimization of the EP sensors described herein.
  • the position of the PPG sensor is carefully selected to provide robust sensing of heart and respiratory data. Documented experiments were performed to evaluate signal quality at different horizontal and vertical positions along the forehead, and the signals were compared against those from corresponding FDA-approved clinical equipment, which act as sources of ground truth for improvement and optimization of the PPG sensor.
  • the position of the IMU sensor is carefully selected to provide sensing of head movement and respiratory data.
  • a headband is a form factor that is easy for a user to sleep with when implemented correctly (e.g., with the headband portion of the headband-based biosensor system being well-padded and thin, with stress points appropriately positioned at suitable locations around the head of the user when worn). Sleep posture may impact sensing quality of electrodes in typical biosensing devices. Careful selection of electrode locations and algorithms built for signal quality sensing and dynamic re-referencing mitigate issues associated with disruptions to electrode subsets. Studies have been conducted by Applicant that demonstrate improvement of algorithm availability and accuracy resulting from these considerations.
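The "dynamic re-referencing" mentioned above can be sketched generically: when some channels are judged disrupted (e.g., by sleep posture), the remaining good channels supply a new average reference. The function below is an assumption-laden illustration, not the studied algorithm; channel-quality flags are taken as given inputs.

```python
import numpy as np

def rereference(channels, good):
    """Re-reference rows of `channels` (n_channels x n_samples) against the
    average of the channels flagged good, removing common-mode drift."""
    channels = np.asarray(channels, dtype=float)
    ref = channels[np.asarray(good)].mean(axis=0)   # average of good channels
    return channels - ref                           # subtract common reference
```

Because the reference is recomputed from whichever subset is currently good, a single disrupted electrode (e.g., pressed against a pillow) need not corrupt the whole montage.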
  • the electrodes are designed to work well across various demographics, hairstyles, and head shapes of users; this universal compatibility is supported by the mechanical design and form factor of the headband itself, as well as by the electrode and sensor positioning.
  • the various embodiments also accommodate different user behaviors in wearing the headband-based biosensor system - for example, tight versus loose fit, which is addressed by a set of algorithms (including, but not limited to, the first through fifth algorithms described above with respect to Fig. 1, or the like) for signal quality monitoring and evaluation, as well as for noise reduction (via hardware, mechanical, and/or software-based approaches).
  • hardware (“HW”) approaches may include, without limitation, three-fold cascaded amplifying (“3CA”) approaches.
  • mechanical approaches may include, but are not limited to, hanging chamber design of bone conduction speaker (as described above with respect to Fig. 1, or the like), placement of electrodes (placed in locations that will generalize well across different head sizes, shapes, etc.), formfactor of electrodes (e.g., spider electrodes penetrate into hair and can make connection even if the band is being loosely worn due to the spider "arms," or the like), and/or the like.
  • the headband is designed to fit various head sizes.
  • the way that the EMG signal is captured is different from conventional EMG signal capturing techniques and systems.
  • a through-hair sensor (e.g., a spider electrode, or the like) may be used.
  • sensor data from IMU and all 4 channels (or electrodes) may be combined and processed. For example, if the IMU signal is strong, then correlation with the EMG signal may indicate that the EMG signal does not actually correspond to facial EMG.
  • Some application dependent examples include, but are not limited to: (A) deriving known events and indicators in the PPG data to inform signal separation for analysis of heart-related information; (B) using movement data (e.g., acquired from the IMU sensor) to suppress noise in EEG, EOG, and EMG signals for sensing and inferencing in dynamic environments; (C) synthesizing information from a subset of signal data for the detection of conditions, diseases, mental states, etc. (explicitly, an example of this is the use of brain, heart, and respiratory data together to detect the presence of obstructive sleep apnea, or the like); (D) referencing IMU sensor data to determine if high frequency activity is truly EMG signal data or movement related artifacts; or the like.
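As a hedged illustration of item (D) above, the following sketch correlates the envelope of EMG-band activity with IMU acceleration magnitude to decide whether high-frequency activity is likely a movement artifact. The function name, the ~100 ms smoothing window, and the correlation threshold are illustrative assumptions, not the specific algorithms described herein:

```python
import numpy as np

def emg_is_movement_artifact(emg, imu_accel, fs, corr_threshold=0.6):
    """Flag whether high-frequency EMG-band activity is likely a movement
    artifact by correlating its envelope with IMU acceleration magnitude.

    emg:       1-D EMG-band signal (assumed sampled at the same rate as
               imu_accel, purely for simplicity of this sketch)
    imu_accel: array of shape (n, 3) with x, y, z acceleration samples
    """
    # Envelope of EMG activity (rectify + moving average over ~100 ms)
    win = max(1, int(0.1 * fs))
    kernel = np.ones(win) / win
    emg_env = np.convolve(np.abs(emg), kernel, mode="same")

    # Magnitude of motion, with the gravity (DC) component removed
    accel_mag = np.linalg.norm(imu_accel, axis=1)
    motion = np.abs(accel_mag - np.mean(accel_mag))
    motion_env = np.convolve(motion, kernel, mode="same")

    # If the EMG envelope tracks the motion envelope, suspect an artifact
    if np.std(emg_env) == 0 or np.std(motion_env) == 0:
        return False
    corr = np.corrcoef(emg_env, motion_env)[0, 1]
    return bool(corr > corr_threshold)
```

In use, an epoch flagged by such a check could be excluded from facial-EMG inferences (or re-weighted), consistent with the IMU-referencing described above.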
  • the various embodiments reduce such noise using 3CA (as described above).
  • the various embodiments may also utilize software ("SW") approaches, including, but not limited to, at least one of adaptive filtering, independent component analysis, principal component analysis, blind source separation, filtering, or machine learning-based approaches, and/or the like.
  • the machine learning -based approaches may include, without limitation, at least one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like.
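As a hedged illustration of one such SW approach, the sketch below implements a basic least-mean-squares ("LMS") adaptive filter that uses a motion reference (e.g., an IMU-derived channel) to cancel motion-correlated noise from an EP channel. The filter length and step size are illustrative assumptions:

```python
import numpy as np

def lms_denoise(primary, reference, n_taps=8, mu=0.01):
    """Subtract the component of `primary` that is linearly predictable
    from `reference` (e.g., IMU-derived motion) using an LMS adaptive
    filter. Returns the error signal, i.e., the cleaned EP estimate."""
    w = np.zeros(n_taps)                        # adaptive filter weights
    cleaned = np.zeros_like(primary, dtype=float)
    for i in range(n_taps, len(primary)):
        x = reference[i - n_taps:i][::-1]       # most recent samples first
        noise_est = np.dot(w, x)                # noise predicted from reference
        e = primary[i] - noise_est              # cleaned sample (error signal)
        w += 2 * mu * e * x                     # LMS weight update
        cleaned[i] = e
    return cleaned
```

In practice the step size would be tuned (or normalized, as in NLMS) against the reference signal power to balance convergence speed and misadjustment.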
  • sensor synthesis (i.e., combining information from various sources) may be used for signal improvement, noise suppression, or the like.
  • a headband formfactor enables the following locations of the head of users for EP and non-EP sensors to monitor: (1) the channel right above the ear may be used to monitor a big muscle group called the temporalis muscles that is part of the masticatory muscle group (including the masseter muscle), which is critical for many mental states (such as stress, meditation, etc.) as well as sleep stages; (2) the channel on the forehead above the eye may be used to monitor electrical signals corresponding to EMG and/or EOG activity (corresponding to eye muscle movement and electrical activity, etc.) that reflect anger, focus, and the like, which are important for predicting mental states of the user; (3) the offset forehead channels may be used for acquisition of data relevant to lateral eye movement; (4) one or more locations along the headband portion allow for easily capturing breathing sounds that would otherwise be very difficult to do from other locations (empirical data collected in lab settings support this, demonstrating the ability to capture such signals from various locations on the head and with various HW including, but not limited to, piezoelectric sensors, condenser electret microphones, and/or the like).
  • “channel” and “electrode” are used interchangeably.
  • a channel corresponds to a dimension of data (for example, an IMU sensor may have 3 channels corresponding to x, y, and z axes, while a PPG sensor may have different channels corresponding to the frequency of the light being measured).
  • an electrode that senses data is considered a channel.
  • two channels may be used to monitor the left side and the right side of the head of the user or to monitor the forehead region and the temporalis muscle group region, or the like.
  • six channels may be used to monitor the left and right temporalis muscles using two spider electrodes 245a (one on each side for through-hair contact with the skin), two behind-the-ear electrodes 245b (one on each side for skin contact behind the ears), and two forehead electrodes 245c and 245g (one on either side of the forehead), or the like.
  • a single electrode with multiple channels or contact points may be used to communicatively couple these six channels. Algorithms may then be used to extract different information or sensor data signals from each electrode communicatively coupled to multiple sensors (such as the single electrode described above).
  • the use of multiple location contact points enables the headband-based biosensor system to monitor EP data from the user's head regardless of movement of the headband portion of the headband-based biosensor and/or temporary loss of contact, and/or the like.
  • a single contact point that collects different signals for the different EP sensors enables collection of all these types of signals while avoiding use of numerous potential pressure points with the use of a larger number of electrodes (thereby contributing to the comfort in wearing of the headband-based biosensor system).
  • acquisition of data from specific regions of the brain and/or acquisition of data from specific facial muscles groups may be implemented in a manner that is robust, sustainable, accurate, and/or precise.
  • the various embodiments may also provide for monitoring of some, if not all of, the following types of head-based signals: (a) Brain signals (or brain wave signals) - based on EEG data; (b) Eye motion signals - based on EOG data; (c) Facial muscle contraction signals - based on EMG data; (d) Head motion and position (e.g., posture) signals - based on IMU data; (e) Saturation of peripheral oxygen (“SpO2”), heartrate (“HR”), heartrate variability (“HRV”) signals - based on PPG data or pulse oximeter data; (f) Breathing and other head-based sound signals - based on microphone data or other sound sensor data, based on a vibration-based sensor data, based on an electrical sensor data (such as data from a piezoelectric sensor, etc.), based on PPG data, based on IMU data, and/or based on a combination of these sensor data; (g) Breathing motion signals - based on IMU data, and/or the like.
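As a hedged illustration of item (e), HR and a simple HRV metric can be derived from a PPG waveform by detecting systolic peaks and examining the inter-beat intervals. The naive peak detector, amplitude gate, and bpm bounds below are illustrative assumptions, not the optimized processing described herein:

```python
import numpy as np

def heart_metrics_from_ppg(ppg, fs, max_bpm=180):
    """Estimate heart rate (bpm) and a simple HRV metric (SDNN, in ms)
    from a PPG waveform via naive local-maximum peak detection."""
    min_dist = int(fs * 60.0 / max_bpm)         # min samples between beats
    thresh = np.mean(ppg) + 0.5 * np.std(ppg)   # crude amplitude gate
    peaks = []
    for i in range(1, len(ppg) - 1):
        if ppg[i] > thresh and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]:
            if not peaks or i - peaks[-1] >= min_dist:
                peaks.append(i)
    if len(peaks) < 3:
        return None, None                       # not enough beats detected
    ibi_s = np.diff(peaks) / fs                 # inter-beat intervals (s)
    hr_bpm = 60.0 / np.mean(ibi_s)
    sdnn_ms = 1000.0 * np.std(ibi_s)            # SDNN: std of intervals
    return hr_bpm, sdnn_ms
```

A production pipeline would typically bandpass the PPG first and reject intervals corrupted by motion (e.g., using the IMU-referencing discussed above).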
  • the various embodiments may also provide for high fidelity sensing during all times of day, by using at least one of the following: (i) Algorithms for channel quality evaluation; (ii) Algorithms for channel (or partial channel) rejection, selection, and/or referencing; (iii) Algorithms for artifact reduction and noise suppression; (iv) HW- based amplification for noise suppression and signal quality improvement; (v) Acquisition of signals from a large frequency range (e.g., 0 Hz - Hz) (which improves on the monitoring capabilities of conventional head-based systems that appear to ignore lower frequency data);
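The channel quality evaluation noted in item (i) above can be sketched with simple heuristics, for example flat-line detection (suggesting loss of skin contact) and the fraction of signal power in the mains-hum band. The scoring scheme and thresholds below are illustrative assumptions:

```python
import numpy as np

def channel_quality(signal, fs, flat_tol=1e-8):
    """Heuristic channel-quality score in [0, 1]: 0 for a flat line (no
    skin contact), reduced by the fraction of signal power sitting in the
    mains-hum band (45-65 Hz). Thresholds are illustrative assumptions."""
    sig = np.asarray(signal, dtype=float)
    if np.std(sig) < flat_tol:          # flat line: electrode likely off-skin
        return 0.0
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig - np.mean(sig))) ** 2
    mains = (freqs > 45) & (freqs < 65)
    mains_ratio = power[mains].sum() / max(power.sum(), 1e-12)
    return float(max(0.0, 1.0 - mains_ratio))
```

Scores of this kind could feed the channel rejection, selection, and re-referencing algorithms of item (ii).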
  • the various embodiments may also provide for multimodal processing of the collected signals, including, but not limited to, at least one of: Real-time; Offline; On MCU; On mobile device; or On server; and/or the like.
  • Electrode and sensor location criteria: Applicant has conducted many experiments to validate the locations of all sensors on the headband-based biosensor system, as shown and described in Fig. 2, for example. These locations have been validated by comparing important signal characteristics from the time, frequency, and time-frequency domains to those from clinically validated and FDA approved medical equipment (acting as a source of ground truth). Additionally, sensor placement decisions have also been driven by medical expertise and human biology. As examples, the frontal, temporal, and parietal bones of the head offer optimal locations for placement of sensing equipment on users from a wide demographic.
  • algorithms have been developed to sense aspects of user health and cognition. These algorithms may be developed to: (a) Detect events (including, but not limited to, one-time, recurring, short, long, instantaneous, and/or the like) that are apparent in the time, frequency, time-frequency, or latent representations of the sensor data; (b) Identify patterns reflective of cognitive or health conditions, diseases, phenomena, and/or the like; (c) Assess a user's biological data in comparison to general populations, sub-groups, etc.; and/or the like.
  • these algorithms may be designed using at least one of: manually annotated data; unlabeled data; data that has been annotated automatically with algorithms or semi-automatically by algorithms and manual annotators; statistical and historical data from users, surveys, and/or reports; and/or the like.
  • algorithms and/or models may be developed using at least one of: Supervised, unsupervised, semi-supervised, self-supervised, or reinforcement learning approaches, and/or the like; Statistical modeling; Heuristic-based approaches; or Rule-based approaches driven by expert or common knowledge, and/or the like.
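Event detection of the kind described in item (a) above can be sketched as a time-frequency band-power threshold: windows whose power in a band of interest exceeds a multiple of the recording-wide median are flagged as events. The band edges, window length, and threshold multiplier are illustrative assumptions:

```python
import numpy as np

def detect_band_events(sig, fs, band=(11, 16), win_s=0.5, k=3.0):
    """Flag windows whose power in `band` (Hz) exceeds k times the median
    band power across the recording - a crude time-frequency event
    detector. Returns start indices (in samples) of flagged windows."""
    win = int(win_s * fs)
    starts, powers = [], []
    for s in range(0, len(sig) - win + 1, win):
        seg = sig[s:s + win] - np.mean(sig[s:s + win])
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        p = np.abs(np.fft.rfft(seg)) ** 2
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        starts.append(s)
        powers.append(p[in_band].sum())
    powers = np.array(powers)
    thresh = k * np.median(powers)
    return [s for s, p in zip(starts, powers) if p > thresh]
```

A detector like this could, for example, flag spindle-like bursts in EEG-band data; recurring or long events would then be aggregated across windows.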
  • Figs. 3A-3D are schematic diagrams illustrating a nonlimiting example 300 of the bone conduction speaker assembly (and its components) of Figs. 1 and 2, in accordance with various embodiments.
  • a bone conduction speaker assembly 315 (similar to bone conduction speaker assemblies 115a-115n and 215 of Figs. 1 and 2, or the like) is sandwiched between a tail cap 330 (which corresponds to padding material 130 in Fig. 1, or the like) and (an inner surface) of headband portion 310 of a headband-based biosensor system (such as headband-based biosensor systems 105, 205, and 205' of Figs. 1, 2A-2C, and 2D-2H, or the like).
  • the bone conduction speaker assembly 315 may include, without limitation, a bone conduction speaker device 320 (similar to bone conduction speaker device 120 of Fig. 1, or the like), which includes vibration plate 320a and transducer 320b (similar to vibration plate 120a and transducer 120b, respectively, of Fig. 1, or the like); a deformable speaker housing 325a (similar to deformable speaker housing 125a of Fig. 1, or the like); and an air gap 325b (similar to air gap 125b of Fig. 1, or the like).
  • the deformable speaker housing 325a may include, but is not limited to, a deformable material configured to compress toward the transducer 320b within the air gap 325b when the headband portion 310 is pressed up against the head of the user when the headband-based biosensor system is worn by the user, without the pressed-up headband portion 310 causing a shift in alignment of the corresponding vibration plate 320a relative to the head of the user.
  • Fig. 3A depicts a perspective view of a portion of headband portion 310 showing the tail cap 330, while Fig. 3B depicts a see-through view of the tail cap 330 showing the bone conduction speaker assembly 315 (and its components) disposed within tail cap 330 and sandwiched between tail cap 330 and the headband portion 310.
  • Fig. 3C depicts an exploded view showing the relative positions of the headband portion 310, the tail cap 330, the bone conduction speaker 320, and the deformable speaker housing 325a.
  • Fig. 3D depicts a cross-sectional view showing the relative positions of the headband portion 310, the tail cap 330, the bone conduction speaker 320, and the deformable speaker housing 325a, as well as showing the air gap 325b between the transducer 320b and the deformable speaker housing 325a.
  • the vibration plate 320a is fixed in place (e.g., using adhesive, or the like) to an interior surface of the tail cap 330 (as also shown with respect to Figs. 3E and 3F below).
  • FIG. 4 is a diagram illustrating a non-limiting example 400 of a head of a user with labels for potential locations on the head for alignment and positioning of bone conduction speakers and/or sensors, in accordance with various embodiments.
  • a head 405 of a user is shown with labels numbered 1 through 8 indicating locations or regions 410 on the head 405.
  • the locations or regions 410 denote potential locations for which sensors or bone conduction speakers may be aligned or placed.
  • a headband-based biosensor system (such as headband-based biosensor systems 105, 205, and 205' of Figs. 1, 2A-2C, and 2D-2H, or the like) may have a headband portion (similar to headband portions 110, 210, and 310 of Figs. 1-3, or the like) that wraps around the head 405 of the user, in some cases, overlapping with locations or regions #3 - #7 (410), as shown in Fig. 4, when worn with the headband portion wrapping around the forehead of the user and above both ears of the user.
  • the headband-based biosensor system is designed such that the bone conduction speakers or bone conduction speaker assembly (similar to bone conduction speaker devices 120 and 320 and bone conduction speaker assemblies 115, 215, and 315 of Figs. 1-3, or the like) are positioned within the headband portion of the headband-based biosensor system to align with, and to make contact with, at least a portion of location or region #4 (410) in Fig. 4.
  • Location or region #4 (also referred to herein as "behind the ear(s)" or the like) provides the bone conduction speaker with contact with the parietal bone and/or the temporal bone, while minimizing pressing or pressure contact on blood vessels and nerves on the head 405 of the user when the headband portion is pressed up against the head of the user when the headband-based biosensor system is worn by the user (such as when the user is lying down, resting their head on a cushion, pillow, or other surface while sleeping or while resting, or resting their head on a seatback cushion (e.g., headrest portion of an office chair, a car seat, or an airplane seat, etc.)).
  • the structure of the bone conduction speaker assembly (e.g., the hanging chamber speaker design) in conjunction with the deformable materials of the headband-based biosensor system (including the materials of the headband portion, the materials of the tail cap or padding material, and/or the materials of the deformable speaker housing 325a, and/or the like), as described in detail above with respect to Figs. 1-3, also aid in minimizing the pressing or pressure contact.
  • the bone conduction speaker assemblies may be modular components that are disposed on corresponding one or more first portions of an inner surface of an article of headwear.
  • the article of headwear may include, without limitation, one of a headband-based biosensor system, a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality (“VR”) headset, an augmented reality (“AR”) headset, a mixed reality (“MR”) headset, or a bandana, and/or the like.
  • each bone conduction speaker assembly may be one of affixed to, removably attachable to, or integrated with the inner surface of the article of headwear.
  • each transducer may include, but is not limited to, a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon, and/or the like.
  • the modular bone conduction speaker assemblies may be affixed to, removably attachable to, and/or integrated with the inner surface of the article of headwear to align with, and to make contact with, at least a portion of location or region #4 (410) in Fig. 4, for similar reasons as described above.
  • the bone conduction speaker assemblies may be modular components that may each be temporarily affixed to the skin of the head 405 of the user (such as, but not limited to, affixing to at least a portion of location or region #4 (410) in Fig. 4 for the reasons described above) via use of temporary adhesive, or the like.
  • the temporary adhesive may include, but is not limited to, adhesives used for medical bandages, medical electrode adhesives, hydrogel adhesives, or stretchable hydrogel adhesives, and/or the like.
  • Figs. 5A and 5B are diagrams illustrating various nonlimiting examples 500 and 500' of decomposing mixed signal data into multiple distinct sensor signal data each corresponding to one of the two or more different types of electrophysiological (“EP”) sensors, in accordance with various embodiments.
  • the mixed signal data 505 or 525 contains or superimposes an EEG signal 510 or 530, an EMG signal 515 or 535, and an EOG signal 520 or 540 (i.e., superimposing, in this case, signals from three different types of EP sensors).
  • signal processing is performed on the mixed signal data 505 or 525 to decompose the mixed signal data 505 or 525 into (in these cases) three distinct sensor signals corresponding to the EEG signal 510 or 530, the EMG signal 515 or 535, and the EOG signal 520 or 540, as shown in Fig. 5.
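One hedged, simplified way to sketch such a decomposition is frequency-band separation: the mixed electrode signal is split into crude EOG-, EEG-, and EMG-band components via an FFT mask. The band edges below are illustrative assumptions, and real decomposition may instead (or additionally) use adaptive filtering, ICA, or blind source separation as described above:

```python
import numpy as np

def band_component(mixed, fs, lo, hi):
    """Extract the [lo, hi] Hz component of a signal with a hard FFT
    mask (a simplified stand-in for the decomposition described here)."""
    spec = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(mixed))

def decompose_mixed(mixed, fs):
    """Split mixed electrode data into crude EOG-, EEG-, and EMG-band
    components; the band edges are illustrative assumptions."""
    return {
        "eog": band_component(mixed, fs, 0.1, 4.0),     # slow eye movements
        "eeg": band_component(mixed, fs, 4.0, 30.0),    # typical EEG rhythms
        "emg": band_component(mixed, fs, 30.0, 100.0),  # muscle activity
    }
```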
  • the individual or distinct sensor signals may be analyzed individually (and, in some cases, in a correlated manner) to perceive or identify at least one biological and/or psychological state or condition of the user.
  • analysis of EMG signal 515 in Fig. 5A may result in determination of very low muscular activity, as indicated by the absence of any high amplitude activity throughout the depicted EMG signal 515.
  • analysis of the EOG signal 520 in Fig. 5A may result in determination of rapid eye movements, as indicated by the signal activity toward the end of the depicted EOG signal 520.
  • Correlated analysis of both the EMG signal 515 and the EOG signal 520 may result in perceiving that the user is in the rapid eye movement ("REM") stage of sleep.
  • analysis of the EMG signal 535 in Fig. 5B may result in determination of muscular activity, as indicated by the high amplitude activity toward the beginning of the depicted EMG signal 535.
  • analysis of the EOG signal 540 in Fig. 5B may result in determination of eye movements, as indicated by the signal deflections toward the beginning and near the middle of the depicted EOG signal 540.
  • Correlated analysis of both the EMG signal 535 and the EOG signal 540 may result in perceiving that the user is in a wakeful state.
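The correlated EMG/EOG analysis described above can be sketched as a toy rule over one epoch of decomposed signals: low muscle tone combined with eye-movement activity suggests REM sleep, whereas high muscle tone with eye movements suggests wakefulness. The amplitude thresholds and output labels are illustrative assumptions, not the actual classifier:

```python
import numpy as np

def classify_epoch(emg, eog, emg_quiet=0.05, eog_active=0.3):
    """Toy correlated-analysis rule over one epoch of decomposed
    signals; thresholds are illustrative assumptions."""
    emg_level = np.sqrt(np.mean(np.square(emg)))  # RMS muscle tone
    eog_level = np.max(np.abs(eog))               # peak eye-movement deflection
    if eog_level >= eog_active:
        return "REM" if emg_level < emg_quiet else "wake"
    return "NREM/quiet" if emg_level < emg_quiet else "wake"
```

A full implementation would operate on standard-length epochs, include EEG features, and smooth decisions across neighboring epochs.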
  • although Fig. 5 depicts mixed signal data containing three types of EP signals, the various embodiments are not so limited, and the mixed signal data may contain any number of different types of EP signals as desired or as required.
  • selective deactivation of sensors may allow for at least one of sensor calibration, signal decomposition tests, or signal distinction of signals monitored by other sensors, and/or the like, regardless of the number of EP sensors communicatively coupled to an electrode or channel.
  • FIGs. 6A and 6B are flow diagrams illustrating a method 600 for implementing a headband with biosensor data monitoring, in accordance with various embodiments.
  • Method 600 of Fig. 6A continues onto Fig. 6B following the circular marker denoted, "A.”
  • while the method illustrated by Fig. 6 can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D-2H, 3, 4, 5A, and 5B, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation.
  • while each of the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D-2H, 3, 4, 5A, and 5B, respectively, can operate according to the method 600 illustrated by Fig. 6 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D-2H, 3, 4, 5A, and 5B can each also operate according to other modes of operation and/or perform other suitable procedures.
  • method 600 at block 605, may comprise receiving, using a computing system, first electrophysiological ("EP") sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system.
  • the received first EP sensor data from the first electrode may include, without limitation, first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each including, but not limited to, one of an electroencephalography (“EEG”) sensor, an electrooculography (EOG) sensor, an electromyography (“EMG”) sensor, or an electrocardiography (“ECG”) sensor, and/or the like.
  • method 600 may comprise receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode.
  • the received second EP sensor data from each of the one or more second electrodes may include, without limitation, one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes.
  • Method 600 may further comprise, at block 615, applying, using the computing system, signal processing to the received EP sensor data (including at least one of the first EP sensor data (from block 605) or the second EP sensor data (from optional block 610; if applicable), and/or the like) to decompose each mixed signal data (e.g., the corresponding at least one of the first mixed signal data from the first electrode or the second mixed signal data from each second electrode (if applicable), and/or the like) into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors.
  • Method 600 may further comprise, at block 620, analyzing the sensor data using the computing system; in some cases, this analysis includes individually analyzing at least one of the two or more decomposed, distinct sensor signal data corresponding to the first electrode and/or at least one of the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable).
  • Method 600 may further comprise receiving, using the computing system, first non-EP sensor data from each of one or more first non-EP sensors (optional block 625); and analyzing, using the computing system, the received first non-EP sensor data individually (optional block 630).
  • method 600 may comprise analyzing, using the computing system, the EP sensor signal data corresponding to the first electrode, the sensor data corresponding to each second electrode (if applicable), and the received first non-EP sensor data (if applicable) in a correlated manner.
  • Method 600 may comprise perceiving at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data, the second EP sensor data (if applicable), and/or the first non-EP sensor data (if applicable) were collected, based at least in part on analysis of the sensor signal data corresponding to the first electrode, based at least in part on individual analysis of sensor data corresponding to each second electrode (if applicable), based at least in part on individual analysis of the first non-EP sensor data (if applicable), and/or based at least in part on correlated analysis of the two or more decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and the received first non-EP sensor data (if applicable).
  • method 600 may comprise determining whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading, based at least in part on correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or the like. If so, method 600 may continue to the process at block 645. If not, method 600 may return to the process at block 605 (and, in some cases, optional block 610).
  • method 600 may comprise, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.
  • Method 600 either may return to the process at block 605 (and, in some cases, optional block 610) or may continue onto the process at block 650 in Fig. 6B.
  • the computing system may include, without limitation, at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like.
  • the at least one user device may each include, without limitation, one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.
  • the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like.
  • when the headband-based biosensor system is worn by the user, the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like.
  • the headband portion may be made of one or more materials including, but not limited to, at least one of polyurethane, thermoplastic polyurethane ("TPU”), silicone, or polycarbonate (“PC”), and/or the like.
  • the one or more first non-EP sensors may each include, but is not limited to, at least one of a photoplethysmography (“PPG”) sensor, an inertial measurement unit (“IMU”) sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like.
  • the one or more first non-EP sensors may comprise at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system.
  • perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: (i) one or more detected events comprising at least one of one or more onetime events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or a combination of the at least one of the two or more decomposed, distinct sensor
  • the biological data of each user may include, without limitation, the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or the combination of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), and/or the like.
  • high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the first non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user may be achieved based at least in part on at least one of: (1) one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; (2) one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; (3) one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; (4) hardware
  • At least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.
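The channel-quality and channel selection/rejection algorithms referenced above are described functionally rather than concretely in the disclosure. As a minimal illustrative sketch (all function names and thresholds below are invented, not taken from the patent), a variance- and amplitude-based check might flag flat-lined or saturated channels for rejection:

```python
import statistics

def channel_quality(samples, flat_tol=1e-6, amp_limit=500.0):
    """Return a quality verdict for one electrophysiological channel.

    A near-zero standard deviation suggests a flat-lined (e.g.,
    disconnected) electrode; an implausibly large peak amplitude
    suggests saturation or a gross motion artifact. The thresholds
    here are illustrative only.
    """
    spread = statistics.pstdev(samples)
    peak = max(abs(s) for s in samples)
    if spread < flat_tol:
        return "bad:flatline"
    if peak > amp_limit:
        return "bad:saturated"
    return "good"

def select_channels(channels):
    """Keep only channels judged 'good' (channel selection/rejection)."""
    return {name: data for name, data in channels.items()
            if channel_quality(data) == "good"}
```

A channel map such as `{"fp1": [...], "ref": [...]}` would then be filtered in this way before referencing or further analysis.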
  • any sensor signal noise in at least one of the first EP sensor data, the second EP sensor data, or the first non-EP sensor data, and/or the like, that is due to motion of the user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like.
  • the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like.
  • the motion of the user may include, but is not limited to, at least one of micro motions of the user (including, without limitation, motion due to breathing, motion due to snoring, eye movement (when the user is awake or during REM sleep, etc.), motion due to heart beating, etc.) or macro motions of the user (including, without limitation, walking, running, or performing other physical activities, movement during sleep, etc.) and/or the like.
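Among the noise-reduction techniques listed above, principal component analysis can be illustrated with a short numpy sketch on synthetic data: a large motion artifact shared by all electrodes tends to dominate the first principal component, and projecting it out leaves the smaller, channel-specific signals. This is an illustrative stand-in under assumed synthetic conditions, not the disclosed implementation:

```python
import numpy as np

def remove_dominant_component(x):
    """Suppress the dominant principal component of a channel-by-sample
    array. A motion artifact common to all electrodes tends to dominate
    the first component; projecting it out leaves the smaller,
    channel-specific physiological signals. Illustrative only."""
    centered = x - x.mean(axis=1, keepdims=True)
    # Singular values are returned in descending order, so u[:, 0]
    # spans the strongest spatial component.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = np.outer(u[:, 0], s[0] * vt[0])   # rank-1 reconstruction
    return centered - pc1

# Synthetic demo: three channels share a large low-frequency "motion" drift.
t = np.linspace(0.0, 1.0, 500)
motion = 50.0 * np.sin(2 * np.pi * 0.7 * t)                    # shared artifact
signals = np.stack([np.sin(2 * np.pi * f * t) for f in (8, 12, 20)])
mixed = signals + motion                                       # artifact on every channel
cleaned = remove_dominant_component(mixed)
```

After the projection, the residual amplitude is on the order of the unit-amplitude physiological sinusoids rather than the 50x-larger shared drift.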
  • the headband portion may further include, but is not limited to, one or more straps that may be configured to tighten the headband portion around a head of the user in a closed band.
  • any sensor signal variances of at least one of the first EP sensor data, the second EP sensor data, and/or the first non-EP sensor data that are due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user may be compensated based at least in part on at least one of: (a) one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; (b) one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; or (c) placement of the first electrode on the first portion of the headband portion, wherein the first portion may be determined to result in sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user.
  • the one or more fourth algorithms may be similar, if not identical, to the one or more first algorithms, which is as described above.
  • the one or more fifth algorithms may be similar, if not identical, to the one or more third algorithms, which is as described above.
  • the signal processing of the received first EP sensor data may comprise multimodal processing including, but not limited to, at least one of real-time processing, near-real-time processing, online processing, offline processing, on-microcontroller-unit (“on-MCU”) processing, on-user-device processing, or on-server processing, and/or the like.
  • the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis including, but not limited to, at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-microcontroller-unit (“on-MCU”) analysis, on-user-device analysis, or on-server analysis, and/or the like.
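The disclosure does not fix a particular method for decomposing one electrode's mixed signal into EEG-, EOG-, EMG-, or ECG-like components. A crude frequency-band separation on synthetic data (the FFT-mask approach, the band edges, and the sampling rate are illustrative assumptions; a real system would use proper filtering and source separation) shows the general idea:

```python
import numpy as np

def band_component(signal, fs, lo, hi):
    """Extract the [lo, hi] Hz band of a 1-D signal via an FFT mask.

    A crude stand-in for the (unspecified) signal decomposition:
    zero out all spectral bins outside the band, then invert.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 250.0                                  # assumed sampling rate
t = np.arange(0, 4, 1.0 / fs)
# Mixed signal: a 10 Hz "EEG alpha"-like wave plus a slower "ECG"-like wave.
mixed = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 1 * t)
eeg_like = band_component(mixed, fs, 8, 13)    # alpha band (illustrative)
ecg_like = band_component(mixed, fs, 0.5, 4)   # low band (illustrative)
```

Each extracted component retains only the tone that falls inside its band, which is the essence of splitting one mixed-signal channel into per-sensor-type streams.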
  • method 600 may comprise activating, using the computing system, at least one stimulation device disposed on one or more third portions of the headband portion.
  • each stimulation device may include, without limitation, one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like.
  • Each stimulation device may be configured to stimulate a physiological response in the user when activated.
  • activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user.
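The closed-loop behavior described in the bullets above, where stimulation is triggered only once the perceived state is judged indicative of an actual state, can be sketched as hypothetical control flow (all names, states, and the confidence-threshold criterion are invented for illustration; the disclosure leaves the actual decision criterion unspecified):

```python
from dataclasses import dataclass

@dataclass
class Stimulator:
    """Hypothetical stand-in for a claimed stimulation device
    (electrical, vibration, audio, or light based)."""
    kind: str
    active: bool = False

    def activate(self):
        self.active = True

def maybe_stimulate(perceived_state, confidence, stimulators,
                    min_confidence=0.8,
                    target_states=("drowsy", "stressed")):
    """Activate the stimulators only when the perceived state is both
    a targeted state and deemed indicative of an actual state.

    'Indicative of an actual state' is modeled here as a simple
    confidence threshold, purely for illustration.
    """
    if perceived_state not in target_states or confidence < min_confidence:
        return []               # perception not trusted; do nothing
    for device in stimulators:
        device.activate()
    return [device.kind for device in stimulators]
```

A low-confidence or non-targeted perception leaves every device inactive; a trusted perception activates all configured devices.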
  • Method 600 may further comprise: receiving, using the computing system, updated first EP sensor data from the first electrode (block 655); receiving, using the computing system, updated second EP sensor data from each of the one or more second electrodes (optional block 660); applying, using the computing system, signal processing to the received updated EP sensor data (including at least one of the updated first EP sensor data (from block 655) or the updated second EP sensor data (from optional block 660, if applicable), and/or the like) to decompose each updated mixed signal data (e.g., the corresponding at least one of the updated first mixed signal data from the first electrode or the updated second mixed signal data from each second electrode (if applicable), and/or the like) into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors (block 665); and individually analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode and/or at least one of the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the like.
  • determining, using the computing system, whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed may be based at least in part on individual analysis of each of at least one of the two or more decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), or the received first non-EP sensor data (if applicable), and/or based at least in part on correlated analysis of the at least one of the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode with at least one of updated one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the updated first non-EP sensor data (if applicable), and/or the like.
  • analysis may be performed on the mixed signal data corresponding to each electrode.
  • algorithms, machine learning approaches, and/or learning models may be used to facilitate as well as enhance results of analysis based on the mixed signal data.
  • Although Fig. 6 is directed to a headband-based biosensor system, the various embodiments are not so limited, and a headwear-based biosensor system having similar functionality and at least some of the components of the above-described headband-based biosensor system may be used in a similar manner as described with respect to the headband-based biosensor system.
  • the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like.
  • the article of headwear may include, but is not limited to, one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality (“VR”) headset, an augmented reality (“AR”) headset, a mixed reality (“MR”) headset, or a bandana, and/or the like.
  • Fig. 7 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.
  • Fig. 7 provides a schematic illustration of one embodiment of a computer system 700 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of computer or hardware system (i.e., headband-based biosensor systems 105, 205, and 205', server(s) 135', user devices 170a-170n and 175, and media content server(s) 180, etc.), as described above.
  • Fig. 7 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate.
  • Fig. 7, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.
  • the computer or hardware system 700 - which might represent an embodiment of the computer or hardware system (i.e., headband-based biosensor systems 105, 205, and 205', server(s) 135', user devices 170a-170n and 175, and media content server(s) 180, etc.), described above with respect to Figs. 1-6 - is shown comprising hardware elements that can be electrically coupled via a bus 705 (or may otherwise be in communication, as appropriate).
  • the hardware elements may include one or more processors 710, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 715, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 720, which can include, without limitation, a display device, a printer, and/or the like.
  • the computer or hardware system 700 may further include (and/or be in communication with) one or more storage devices 725, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like.
  • Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.
  • the computer or hardware system 700 might also include a communications subsystem 730, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth device, an 802.11 device, a Wi-Fi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like.
  • the communications subsystem 730 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein.
  • the computer or hardware system 700 will further comprise a working memory 735, which can include a RAM or ROM device, as described above.
  • the computer or hardware system 700 also may comprise software elements, shown as being currently located within the working memory 735, including an operating system 740, device drivers, executable libraries, and/or other code, such as one or more application programs 745, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein.
  • one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
  • a set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 725 described above.
  • the storage medium might be incorporated within a computer system, such as the system 700.
  • the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon.
  • These instructions might take the form of executable code, which is executable by the computer or hardware system 700 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 700 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.
  • some embodiments may employ a computer or hardware system (such as the computer or hardware system 700) to perform methods in accordance with various embodiments of the invention.
  • some or all of the procedures of such methods are performed by the computer or hardware system 700 in response to processor 710 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 740 and/or other code, such as an application program 745) contained in the working memory 735.
  • Such instructions may be read into the working memory 735 from another computer readable medium, such as one or more of the storage device(s) 725.
  • execution of the sequences of instructions contained in the working memory 735 might cause the processor(s) 710 to perform one or more procedures of the methods described herein.
  • The terms “machine readable medium” and “computer readable medium,” as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion.
  • various computer readable media might be involved in providing instructions/code to processor(s) 710 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals).
  • a computer readable medium is a non-transitory, physical, and/or tangible storage medium.
  • a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like.
  • Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 725.
  • Volatile media includes, without limitation, dynamic memory, such as the working memory 735.
  • a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 705, as well as the various components of the communication subsystem 730 (and/or the media by which the communications subsystem 730 provides communication with other devices).
  • transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).
  • Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 710 for execution.
  • the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer.
  • a remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 700.
  • These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.
  • the communications subsystem 730 (and/or components thereof) generally will receive the signals, and the bus 705 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 735, from which the processor(s) 710 retrieves and executes the instructions.
  • the instructions received by the working memory 735 may optionally be stored on a storage device 725 either before or after execution by the processor(s) 710.
  • the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise.
  • the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods does not limit the implementations unless specifically recited in the claims below. Thus, when the operation and behavior of the systems and/or methods are described herein without reference to specific software code, one skilled in the art would understand that software and hardware can be used to implement the systems and/or methods based on the description herein.
  • satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, and/or the like.
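The context-dependent readings of "satisfying a threshold" enumerated above map directly onto comparison operators; a small helper (the mode names are invented for illustration) makes the distinction concrete:

```python
import operator

# Map each context-dependent reading of "satisfying a threshold"
# onto the corresponding comparison (mode names are invented).
_THRESHOLD_MODES = {
    "gt": operator.gt,  # greater than the threshold
    "ge": operator.ge,  # greater than or equal to the threshold
    "lt": operator.lt,  # less than the threshold
    "le": operator.le,  # less than or equal to the threshold
    "eq": operator.eq,  # equal to the threshold
}

def satisfies(value, threshold, mode="ge"):
    """Return True if `value` satisfies `threshold` under `mode`."""
    return _THRESHOLD_MODES[mode](value, threshold)
```

A caller would choose the mode appropriate to its context, e.g. `satisfies(signal_quality, 0.9, "ge")` versus `satisfies(noise_level, 0.1, "lt")`.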

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Cardiology (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Psychiatry (AREA)
  • Business, Economics & Management (AREA)
  • Ophthalmology & Optometry (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Physiology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The present invention provides novel tools and techniques for implementing a headband with biosensor data monitoring of a headband-based biosensor system. In various embodiments, a computing system may apply signal processing to decompose mixed signal data collected at an electrode, which is disposed on a headband portion of a headband-based biosensor system, into distinct sensor signal data each corresponding to one type of EP sensor (e.g., EEG, EOG, EMG, ECG, etc.); may analyze at least one of the distinct sensor signal data, each individually and in a manner correlated with other distinct sensor signal data or non-EP sensor data, to perceive at least one biological and/or psychological state or condition of the user and whether such perception is indicative of at least one actual state or condition of the user; and, if so, may send data regarding said perceived state or condition of the user to a user device.
PCT/US2023/067519 2022-05-25 2023-05-25 Headband with biosensor data monitoring Ceased WO2023230589A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263345520P 2022-05-25 2022-05-25
US202263345517P 2022-05-25 2022-05-25
US63/345,517 2022-05-25
US63/345,520 2022-05-25

Publications (1)

Publication Number Publication Date
WO2023230589A1 true WO2023230589A1 (fr) 2023-11-30

Family

ID=88920099

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2023/067518 Ceased WO2023230588A1 (fr) Headband with bone conduction speakers
PCT/US2023/067519 Ceased WO2023230589A1 (fr) Headband with biosensor data monitoring

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2023/067518 Ceased WO2023230588A1 (fr) Headband with bone conduction speakers

Country Status (1)

Country Link
WO (2) WO2023230588A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12296108B2 (en) 2022-03-10 2025-05-13 Therabody, Inc. Device for providing multiple types of therapy to a user
US12311116B2 (en) 2020-01-08 2025-05-27 Therabody, Inc. Wearable devices for providing vibration therapy to a user

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011017778A1 (fr) * 2009-08-14 2011-02-17 David Burton Système de monitorage d’anesthésie et de profondeur de conscience
WO2013134845A1 (fr) * 2012-03-13 2013-09-19 Hongyue Luo Système miniature portable de surveillance sanitaire et procédé correspondant
US20160287125A1 (en) * 2013-12-03 2016-10-06 Headsense Medical Ltd. Physiological and psychological condition sensing headset
CN105997021B (zh) * 2015-01-26 2019-08-02 周常安 穿戴式生理检测装置
US20190269345A1 (en) * 2019-05-21 2019-09-05 Roshan Narayan Sriram Methods and systems for decoding, inducing, and training peak mind/body states via multi-modal technologies

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2250822B1 (fr) * 2008-02-11 2014-04-02 Bone Tone Communications Ltd. Système sonore et procédé pour former un son
US20140185822A1 (en) * 2012-12-28 2014-07-03 Panasonic Corporation Bone conduction speaker and bone conduction headphone device
EP2908549A1 (fr) * 2014-02-13 2015-08-19 Oticon A/s Dispositif de prothèse auditive comprenant un élément de capteur
KR20160131822A (ko) * 2015-05-08 2016-11-16 주식회사 예일전자 진동 출력 장치 및 진동 출력 장치를 포함하는 휴대 전자 기기
PL3337185T3 (pl) * 2015-08-13 2022-01-10 Shenzhen Voxtech Co., Ltd Głośnik z przewodnictwem kostnym
US9497530B1 (en) * 2015-08-31 2016-11-15 Nura Holdings Pty Ltd Personalization of auditory stimulus



Also Published As

Publication number Publication date
WO2023230588A1 (fr) 2023-11-30

Similar Documents

Publication Publication Date Title
JP7708758B2 (ja) Wearable device
CN111867475B (zh) Infrasonic biosensor system and method
CN101467875B (zh) Ear-worn physiological feedback device
CN103720470B (zh) Systems and methods to gather and analyze electroencephalographic data
JP6080278B2 (ja) Systems and methods for collecting and analyzing electroencephalographic data
US11458279B2 (en) Sleep enhancement system and wearable device for use therewith
JP2021509842A (ja) Wearable computing device
WO2012170816A2 (fr) System and method for detecting sleep onset
WO2023230589A1 (fr) Headband with biosensor data monitoring
US20250375137A1 (en) Earphone, information processing device, and information processing method
US20250302319A1 (en) Systems and methods for monitoring and acting on a physiological condition of a stimulation system recipient
JP2018503481A (ja) Headset for acquiring biological signals
Looney et al. Ear-EEG: Continuous brain monitoring
WO2010079257A1 (fr) Device, apparatus, and method for measuring biological information
US20240197228A1 (en) Electrode arrangement for measuring biopotentials on a person's head
US20220338810A1 (en) Ear-wearable device and operation thereof
KR20220124410A (ko) Remote-controlled and automated brain training system based on neurofeedback
CN117717323A (zh) Health sensing retaining band
HK40012217A (en) A method and system for collecting and analyzing electroencephalographic data
Paul In-Ear Electrophysiology for Unobtrusive Auditory Brain-Machine Interfaces
HK40012217B (en) A method and system for collecting and analyzing electroencephalographic data
HK1228237B (zh) Systems and methods to gather and analyze electroencephalographic data
HK1228237A1 (en) Systems and methods to gather and analyze electroencephalographic data
HK1196522B (en) Systems and methods to gather and analyze electroencephalographic data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23812795

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23812795

Country of ref document: EP

Kind code of ref document: A1
