WO2025085747A1 - Wearable neurostimulation device - Google Patents
- Publication number
- WO2025085747A1 (application PCT/US2024/051974)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural activity
- subject
- ultrasound
- sensor data
- rem
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/372—Analysis of electroencephalograms
- A61B5/374—Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/25—Bioelectric electrodes therefor
- A61B5/263—Bioelectric electrodes therefor characterised by the electrode materials
- A61B5/27—Conductive fabrics or textiles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/398—Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6802—Sensor mounted on worn items
- A61B5/6803—Head-worn items, e.g. helmets, masks, headphones or goggles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M21/02—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N7/00—Ultrasound therapy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0027—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
- A61M2021/0038—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense ultrasonic
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2230/00—Measuring parameters of the user
- A61M2230/08—Other bio-electrical signals
- A61M2230/10—Electroencephalographic signals
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N7/00—Ultrasound therapy
- A61N2007/0004—Applications of ultrasound therapy
- A61N2007/0021—Neural system treatment
- A61N2007/0026—Stimulation of nerve tissue
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- the subject matter described relates generally to neurostimulation and, in particular, to a wearable device that provides neurostimulation to encourage lucid dreaming.
- Neurostimulation is the process of modulating the nervous system. Neurostimulation techniques offer a long list of potential benefits, including pain relief, epilepsy management, treatment of depression, rehabilitation, and more. Neurostimulation is a continuously evolving technological field and numerous techniques have been developed. One particular area of interest is using neurostimulation to induce lucid dreaming.
- Typical neurostimulation methods are non-dynamic processes. Methods like focused ultrasound, electrical current stimulation, and magnetic stimulation usually provide stimulation using a predefined set of parameters including intensity, duration, or duty cycle. This can limit the effectiveness of these techniques as they fail to account for the current state of the subject’s brain activity or the impact of other stimuli that may be present.
- Disclosed is a wearable neurostimulation device that uses feedback from electroencephalography (EEG) sensors to control neurostimulation of the subject.
- One of the defining characteristics of a lucid dream versus a regular dream is the activation of the frontal lobe. This can be measured by a gamma frequency power shift.
- the wearable neurostimulation device can induce lucid dreams by creating this neural activation artificially using transcranial focused ultrasound.
- Transcranial focused ultrasound stimulation is a non-invasive technique and may provide various advantages over other techniques. These advantages can include focused modulation, improved depth of penetration, and reduced spread.
- transcranial focused ultrasound stimulation uses focused ultrasound beams that can be directed to specific brain regions.
- the focal point of an ultrasound beam can be adjusted to target a precise area of the brain, which can provide better spatial resolution than electricity-based techniques.
- transcranial focused ultrasound stimulation can also penetrate deeper into the brain than electricity-based techniques, allowing greater flexibility in which portions of the brain are stimulated.
- the currents used in electricity-based techniques can spread across the scalp and underlying tissues, while focused ultrasound beams can remain tightly focused on the target brain region, reducing off-target effects.
- the wearable neurostimulation device is a headset that includes one or more EEG sensors and a set of ultrasound transducers.
- the data generated by the EEG sensors is provided as input to a model (which may be hosted on the headset or an external device with a data connection to the headset) which generates specific instructions for the ultrasound transducers to provide targeted neurostimulation to induce lucid dreaming.
- FIG. 1 is a block diagram of a neurostimulation system, according to one embodiment.
- FIG. 2 is a block diagram of the headset of FIG. 1, according to one embodiment.
- FIG. 3 is a block diagram of the client device of FIG. 1, according to one embodiment.
- FIG. 4 illustrates operation of the encoder of FIG. 3, according to one embodiment.
- FIG. 5A is a flowchart illustrating the operation of the encoder and a decoder in conjunction, according to one embodiment.
- FIG. 5B illustrates the transformer architecture, according to one embodiment.
- FIG. 6 illustrates the use of a set of driving signals to provide focused ultrasound, according to one embodiment.
- FIG. 7 shows an example headset design, according to one embodiment.
- FIG. 8 illustrates the structure of an ultrasound lens in a piezoelectric material, according to one embodiment.
- FIG. 9 illustrates a stack of ultrasound transducers in a piezoelectric material that may be used to focus an ultrasound beam on a target, according to one embodiment.
- FIG. 10 is a flowchart of a method for controlling a set of ultrasound transducers to induce a desired brainwave state, according to one embodiment.
- FIG. 11 is a block diagram illustrating an example of a computer suitable for use as a client device of FIG. 1, according to one embodiment.
- FIG. 1 illustrates one embodiment of a neurostimulation system 100.
- the neurostimulation system 100 includes a headset 110 connected to a client device 140.
- the headset 110 is connected to the client device 140 via a network 170, but this should be understood broadly to encompass any data connection between the devices, such as a Bluetooth® or other peer-to-peer connection.
- the neurostimulation system 100 includes different or additional elements.
- the functions may be distributed among the elements in a different manner than described.
- the headset 110 may include an integrated computing system that performs the functionality described below with reference to the client device 140, removing or lessening the need for a separate client device to control the headset.
- the headset 110 includes one or more ultrasound transducers that can generate targeted ultrasound to stimulate specific portions of the wearer’s brain.
- the headset can also include one or more sensors (e.g., EEG sensors) to measure neural activity of the wearer.
- one or more combined EEG sensors and ultrasound transducers may be used.
- the measured neural activity may be used (e.g., by sending it to the client device 140 for processing) to control the ultrasound transducers.
- the combination of sensors and transducers may provide a positive feedback loop that induces a desired brainwave state in the wearer (e.g., brainwave states conducive to lucid dreaming, focus, meditation, or a positive mood, etc.).
- the client device 140 is a computing device that can control operation of the headset.
- the client device 140 is a smartphone or other computing device of the wearer that runs a dedicated application (or “app”) for controlling the headset 110.
- the client device 140 may connect to the headset 110 via Bluetooth® and receive sensor data indicating neural activity of the wearer.
- A model (e.g., a machine-learning model) is trained or otherwise configured to generate instructions that stimulate portions of the user’s brain to induce a desired brainwave state.
- Various embodiments of the client device 140 are described in greater detail below, with reference to FIG. 3.
- the network 170 provides the communication channels via which the other elements of the neurostimulation system 100 communicate.
- the network 170 can include any combination of local area and wide area networks, using wired or wireless communication systems. As described previously, the network 170 may additionally or alternatively include direct connections such as Bluetooth® or other peer-to-peer links.
- the network 170 uses standard communications technologies and protocols. For example, the network 170 can include communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line (DSL), etc.
- networking protocols used for communicating via the network 170 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP).
- Data exchanged over the network 170 may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML).
- some or all of the communication links of the network 170 may be encrypted using any suitable technique or techniques.
- FIG. 2 illustrates one embodiment of the headset 110.
- the headset 110 includes a REM sensor 205, one or more neural sensors 210, one or more ultrasound transducers 220, a connection module 230, and a controller 240.
- the headset 110 includes different or additional elements.
- the functions may be distributed among the elements in a different manner than described. For example, one or more combined neural sensors and ultrasound transducers may be used rather than these being two distinct sets of components in the headset 110.
- the REM sensor 205 collects data indicative of when the wearer is experiencing rapid eye movement (REM) sleep.
- the REM sensor 205 is one or more electrooculogram (EOG) sensors.
- the EOG sensor or sensors measure electrical potential between the front and back of one or both of the wearer’s eyes that correlates with eye movement.
- the neural sensors 210 measure neural activity of the wearer.
- the neural sensors 210 include EEG sensors that measure voltage variations due to neural activity of the wearer and generate a corresponding EEG signal.
- the EEG signal can include various frequency components corresponding to brainwave states in target portions of the brain.
- EEG signals with frequencies in the gamma band are of particular interest for inducing lucid dreaming.
- neural activity with a frequency of approximately 40 Hz is understood to be responsible for lucidity while asleep.
- stimulating neural activity in the frontal cortex in this frequency range can induce lucid dreaming.
- Other frequencies may be applied to induce other desired brainwave states.
- the neural sensors 210 can include one or more functional near-infrared spectroscopy (fNIRS) sensors. These sensors enable continuous-wave fNIRS analysis of the wearer’s neural activity.
- Continuous-wave fNIRS is a non-invasive neuroimaging technique that quantifies brain activity by measuring the changes in blood flow near a region of interest (ROI). It is accurate, portable, and robust to head movement.
- fNIRS technology takes advantage of the relative transparency of human tissue to light in the near-infrared (IR) optical window. IR light is directed onto and passes through skin and bone tissues but is absorbed by the blood.
- the fNIRS sensors measure the attenuation in the intensity of the light due to absorption, and an estimate of neural activity in the ROI can be calculated using the Beer-Lambert Law.
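- The Beer-Lambert calculation referenced above can be sketched as follows. This is an illustrative sketch only: the function name and the coefficient values in the example call are placeholders, not calibrated constants from the source.

```python
import math

def delta_concentration(i_measured, i_baseline, epsilon, path_len_cm, dpf):
    """Estimate the change in chromophore concentration from light attenuation.

    Modified Beer-Lambert law: delta_A = epsilon * delta_c * d * DPF,
    where delta_A = -log10(I / I0); solving for delta_c gives the estimate.
    """
    delta_a = -math.log10(i_measured / i_baseline)  # change in optical density
    return delta_a / (epsilon * path_len_cm * dpf)

# Hypothetical values: a 20% drop in detected intensity, unit extinction
# coefficient, 3 cm source-detector path, differential pathlength factor 6.
dc = delta_concentration(i_measured=0.8, i_baseline=1.0,
                         epsilon=1.0, path_len_cm=3.0, dpf=6.0)
```

A drop in detected intensity yields a positive concentration change (more absorber, i.e., more blood, near the ROI), which is the quantity used to estimate neural activity.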
- the ultrasound transducers 220 generate ultrasound waves that can be focused to form beams to stimulate targeted portions of the brain with desired frequencies to induce a desired brainwave state.
- the ultrasound transducers 220 can perform the above-referenced stimulation of the frontal cortex at approximately 40 Hz to induce lucid dreaming.
- Any suitable components may be used for generation and detection of ultrasound, including multi-element transducers (ultrasound arrays), an ultrasound generation system, or semiconductor transducers, such as capacitive micromachined ultrasonic transducers (CMUTs).
- Although the ultrasound transducers 220 are shown as a single element, some embodiments may use separate ultrasound transmitters and receivers.
- the connection module 230 manages the connection between the headset 110 and the client device 140.
- the connection module 230 provides the neural signal (e.g., an EEG signal) generated by the neural sensors 210 to the client device 140, which processes the neural signal using a model to generate instructions for the ultrasound transducers 220.
- the connection module 230 receives the instructions and passes them to the controller 240.
- When the wearer puts on the headset before falling asleep, it operates in a low-power mode in which only the REM sensor 205 is active while the neural sensors 210 and ultrasound transducers 220 are inactive.
- the connection module 230 provides the signals generated by the REM sensor 205 to the client device 140, which determines from these signals when the user enters a REM sleep state.
- the connection module 230 receives a signal from the client device 140 that the user has entered REM sleep and activates the neural sensors 210 and ultrasound transducers 220 to monitor the wearer’s neural activity and induce it into a desired state (e.g., lucid dreaming).
- the controller 240 generates control signals based on the instructions received from the client device 140 to drive the ultrasound transducers 220 to generate ultrasound beams that target the desired portions of the wearer’s brain to induce the desired brainwave state (e.g., lucid dreaming).
- The components of the headset 110 thus combine to provide the feedback loop that can dynamically respond to the wearer’s neural activity to induce the desired response via neurostimulation.
- FIG. 3 illustrates one embodiment of the client device 140.
- the client device 140 includes a data ingest module 310, a REM detection module 315, an encoder 320, a decoder 330, and an instructions module 340.
- the client device 140 includes different or additional elements.
- the functions may be distributed among the elements in a different manner than described.
- the data ingest module 310 receives data generated by the REM sensor 205 and neural sensors 210 from the headset 110.
- the data ingest module 310 may preprocess the received data. For example, the data ingest module may perform quality control checks to detect transmission or sensor glitches, convert received signals into a target format for use with the model, filter aspects of the received signals that are not of interest (e.g., neural activity signals outside of the gamma band in a lucid dreaming application), and the like.
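- As a sketch of the gamma-band filtering mentioned above (the function, cutoffs, and sampling rate are illustrative assumptions; the source does not specify an implementation), an FFT-based band-pass filter:

```python
import numpy as np

def bandpass_fft(signal, fs, low_hz=30.0, high_hz=100.0):
    """Zero out frequency components outside [low_hz, high_hz] (gamma band here)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * keep, n=len(signal))

fs = 256                      # assumed EEG sampling rate in Hz
t = np.arange(fs) / fs        # one second of samples
# Synthetic signal: a 10 Hz (alpha) component plus a 40 Hz (gamma) component.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
gamma = bandpass_fft(eeg, fs)  # only the 40 Hz component survives
```

In practice a causal FIR or IIR filter would likely be preferred for streaming sensor data; the FFT version is simply the shortest way to show the idea.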
- the REM detection module 315 uses signals provided by the REM sensor 205 to detect when the wearer is experiencing REM sleep. The signals provided by the REM sensor 205 may be analyzed using one or more rules to determine whether the wearer is currently experiencing REM sleep.
- a frequency of changes in eye position above a first threshold amount may be calculated and if the frequency exceeds a second threshold amount, the wearer may be classified as experiencing REM sleep.
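- The two-threshold rule described above might look like the following sketch (the threshold values and the sample-to-sample event criterion are assumptions; the source gives no specific values):

```python
def is_rem(eye_positions, fs, move_threshold=0.2, freq_threshold=3.0):
    """Classify a window of REM-sensor eye positions as REM sleep.

    A "movement" is a sample-to-sample position change larger than
    move_threshold (first threshold); the window is classified as REM
    if movements occur more than freq_threshold times per second
    (second threshold).
    """
    movements = sum(
        1 for a, b in zip(eye_positions, eye_positions[1:])
        if abs(b - a) > move_threshold
    )
    return movements / (len(eye_positions) / fs) > freq_threshold
```

For example, a rapidly alternating eye-position trace is classified as REM while a flat trace is not.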
- a machine-learning classifier may be applied to the signals provided by the REM sensor 205 to determine whether the wearer is experiencing REM sleep.
- the classifier may be trained on a set of training data collected by REM sensors attached to other individuals and labelled based on a human expert’s determination of which portions of the data correspond to REM sleep versus other sleep (or being awake).
- the encoder 320 takes the EEG signals as input (subject to any preprocessing performed) and outputs a classification vector containing information about the neural activity represented by the EEG signal.
- the classification vector indicates one or more of a discrete set of labels that apply to the neural activity represented by the EEG signal.
- the classification vector may indicate that lucid dreaming is occurring or is not occurring, that increased activity is occurring on the right or left side of the brain (or both), or that no increase in neural activity is detected, etc.
- FIG. 4 illustrates one embodiment of the encoder 320.
- the encoder 320 has a transformer encoder architecture that receives EEG signals and applies convolutional matrix operations to pool and tokenize signals that can be used in a transformer model self-attention mechanism. This includes applying dot products, standardization scaling, softmax operations, a feedforward neural network, and multiple fully connected layers. The output of the encoder is the classification vector.
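- The self-attention step named above can be illustrated with a minimal scaled dot-product attention over pooled EEG tokens (the dimensions and random weights here are arbitrary placeholders; the source does not disclose hyperparameters):

```python
import numpy as np

def self_attention(tokens, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of tokens."""
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # dot products + scaling
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                 # weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 16))  # e.g., 8 pooled EEG tokens, 16-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(16, 16)) for _ in range(3))
out = self_attention(tokens, w_q, w_k, w_v)  # same shape as tokens: (8, 16)
```

A full encoder would stack several such attention blocks with the feedforward and fully connected layers described above before emitting the classification vector.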
- the encoder may be trained on a dataset of manually labeled EEG signals to produce classification vectors with a desired degree of accuracy (e.g., at least a threshold value for both precision and recall, etc.).
- the classification vector may be provided as input to a decoder 330.
- the decoder 330 takes the classification vector as input and produces a sequence of one or more ultrasonic pulses that are predicted to induce the desired brainwave state in the wearer of the headset 110 given the current neural activity indicated by the EEG signal.
- FIG. 5A illustrates a general transformer architecture in which the encoder 320 and decoder 330 may be used together to generate instructions for generating ultrasound to induce a desired brainwave state.
- the decoder 330 is a generative decoder transformer block that can be trained based on historical data indicating pulse sequences that were successful and pulse sequences that were unsuccessful in inducing the desired brainwave state (e.g., lucid dreaming or conditions that encourage lucid dreaming in the wearer’s brain) given the starting neural activity of the wearer.
- the pulse sequence generated by the decoder 330 may include steering an ultrasound beam to mimic neural activation patterns observed in neural activity training data (e.g., fMRI training data) of individuals in the desired brainwave state.
- the model may be trained on a large collection of additional neural activity data (e.g., EEG data) from individuals in the desired brainwave state.
- This training creates a set of weights in the encoder block’s feedforward neural network that identify patterns in the user’s current brainwave state, which are in turn passed to the decoder block that generates the ultrasonic pulse sequences.
- any type of data that is indicative of brainwave state may be used for training, not just fMRI data and EEG data.
- FIG. 5B illustrates a specific embodiment of the transformer architecture that may be used with a combination of fMRI and EEG data, but that can also be adjusted to use any type of neural activity sensor data by fine tuning the transformer with appropriate training data.
- the instructions module 340 takes the pulse sequence generated by the decoder 330 and packages it as a set of instructions to send to the headset 110.
- the instructions may be sent to the headset 110 and implemented by the controller 240 to generate the pulse sequence generated by the decoder 330.
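- A minimal sketch of how the instructions module 340 might package a pulse sequence (the field names and the JSON encoding are assumptions; the source does not specify a wire format):

```python
import json

def package_instructions(pulse_sequence):
    """Serialize a decoder-generated pulse sequence for transmission to the headset.

    pulse_sequence is a list of (frequency_hz, duration_ms, phase_offsets)
    tuples; all field names below are hypothetical.
    """
    return json.dumps({
        "version": 1,
        "pulses": [
            {"frequency_hz": f, "duration_ms": d, "phase_offsets": list(p)}
            for f, d, p in pulse_sequence
        ],
    })

# One 40 Hz pulse lasting 250 ms with per-transducer phase offsets.
msg = package_instructions([(40.0, 250, [0.0, 0.1, 0.2])])
```

The controller 240 on the headset would parse such a message and translate each entry into driving signals for the ultrasound transducers 220.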
- FIG. 6 illustrates example driving signals that may be used to drive a set of elemental ultrasound transducers 220 to steer the resulting ultrasound beam.
- a set of driving signals where each transducer 220 is in phase may focus the beam directly in front of the set of transducers while staggering the phase from top to bottom or vice versa may focus the beam towards the bottom or top of the array of transducers 220, respectively.
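- The staggered-phase steering described above follows the standard phased-array delay law delay_i = i * d * sin(theta) / c; the sketch below uses illustrative element-spacing and sound-speed values (not values from the source):

```python
import math

def steering_delays(n_elements, pitch_m, angle_rad, speed_m_s=1540.0):
    """Per-element firing delays (seconds) that steer a linear array's beam.

    With equal delays (angle 0) the beam points straight ahead; a linear
    stagger across the elements tilts the beam toward one end of the array.
    """
    return [i * pitch_m * math.sin(angle_rad) / speed_m_s
            for i in range(n_elements)]

# 8 elements at 1 mm pitch steered 10 degrees off axis (assumed values).
delays = steering_delays(8, 1e-3, math.radians(10))
```

A negative angle reverses the stagger, tilting the beam the other way, matching the top-versus-bottom steering behavior described above.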
- FIG. 7 illustrates an example embodiment of the headset 110.
- the headset includes a headband that can sit comfortably around the wearer’s head.
- The neural sensors 210 (in this case, EEG sensors) and ultrasound transducers 220 may be disposed on an interior surface of the headband such that they rest against the wearer’s forehead. This position enables efficient stimulation of the frontal regions of the brain and similarly easy acquisition of feedback regarding the resulting neural activity.
- ultrasonic transducers typically have cylindrical shapes and are made of rigid materials, which is not conducive to wearer comfort, especially for a headset 110 intended to be worn during sleep.
- the comfort of the headband may be improved using ultrasound transducers 220 made from a piezoelectric fabric that can be incorporated directly into the headband.
- Example piezoelectric materials include lead zirconate titanate (PZT) and polyvinylidene fluoride (PVDF).
- a conductive fabric may be made in a variety of ways, such as by attaching conductive elements to a non-conductive fabric or by making the fabric itself from conductive threads.
- metallic thread weaving is used in which metal wires or metallic threads (e.g., made from silver, gold, stainless steel, or copper, etc.) are woven or knitted into traditional textiles. The result is a fabric that maintains much of its textile flexibility but can also conduct electricity.
- deposition or plating is used. Fabrics can be coated with a thin layer of metal using techniques like sputtering or chemical vapor deposition.
- Electroless plating may also be used, which is a chemical process where a fabric is soaked in a solution containing metal ions, and a chemical reducing agent is used to deposit the metal onto the fabric's fibers. In each case, the fabric retains its flexibility while the thin metallic layer imparts conductive properties.
- an intrinsically conductive polymer such as polyaniline or polythiophene may be used. These polymers can be blended with other textile fibers or coated onto fabrics to make them conductive.
- conductive inks or pastes may be used to create a conductive fabric.
- Conductive inks or pastes, containing materials like silver flakes or carbon particles, can be printed, painted, or screen-printed onto fabrics, providing them with conductive traces.
- carbon infusion or embedding/encapsulation of conductive particles may be used to create a conductive fabric.
- Conductive carbon particles (e.g., carbon black, graphite, or carbon nanotubes) can be added during the fiber production process or coated onto existing fabrics.
- conductive particles such as silver nanoparticles or carbon nanotubes can be embedded into polymer fibers. When these fibers are woven or knitted into fabrics, the resulting textile is conductive.
- dipping or impregnation may be used to create a conductive fabric.
- a fabric is dipped into a conductive solution, coating it with a layer of conductive material. This can be done using solutions of conductive polymers or suspensions of metallic particles.
- the piezoelectric material can be integrated to provide one or more ultrasound focusing elements within the fabric.
- the piezoelectric material may be integrated by various techniques, such as weaving/braiding, deposition, or embedding.
- weaving/braiding a piezoelectric fiber is woven or braided into the fabric among the non-piezoelectric fibers.
- deposition processes such as sputtering, spin-coating, or electrospinning are used to deposit the piezoelectric material onto the fabric.
- piezoelectric particles are embedded into polymers, and the resulting composite material is shaped into fibers or sheets that are used to make the fabric.
- FIG. 8 illustrates one embodiment of an ultrasound transducer formed from a piezoelectric material within a fabric.
- the transducer s geometry causes it to act as a lens for the generated ultrasound waves, focusing them towards a point.
- one side of the transducer is concave.
- the transducer inherently focuses the generated ultrasonic waves. It should be appreciated that other transducer geometries may be used to focus ultrasonic waves.
- a layer of piezoelectric material is included that has a cross section that causes it to focus ultrasonic waves (e.g., the crosssection shown in FIG. 8).
- Each successive layer of piezoelectric material can include a hole to allow ultrasound from the previous layer to pass through and a lens-geometry around the periphery to create and focus additional ultrasound waves, as illustrated in FIG. 9. Electrodes can be attached to the conductive materials allowing an electric current to flow through thus causing a vibration of the piezoelectric material and causing a focused ultrasonic beam to form.
- FIG. 10 illustrates a method 1000 for controlling a set of ultrasound transducers to induce a desired brainwave state in a subject, according to one embodiment.
- the steps of FIG. 10 are illustrated from the perspective of the client device 140 performing the method 1000. However, some or all of the steps may be performed by other entities or components. In addition, some embodiments may perform the steps in parallel, perform the steps in different orders, or perform different steps.
- the method 1000 begins with the client device 140 receiving 1010 neural activity sensor data.
- the neural activity data may be generated by one or more neural activity sensors (e.g., EEG sensors, fNIRS sensors, etc.) of the headset 110 and provided to the client device 140 via Bluetooth® or another data connection.
- the client device 140 generates 1020 a classification vector that represents brain activity of the wearer of the client device 140 by applying the neural activity data as input to an encoder.
- the client device 140 uses a decoder (e.g., a generative transformer decoder) to generate 1030 a pulse sequence from the classification vector.
- the pulse sequence is one predicted to induce or assist inducing a desired brainwave states in the wearer of the headset 110 in view of the neural activity data.
- the pulse sequence may be generated to induce a brainwave state conducive to lucid dreaming, focus, meditation, a positive mood, or any other state that may be classified from neural activity data.
- the client device 140 provides 1040 instructions to the headset that cause the transducers of the headset to generate the pulse sequence.
- the steps of FIG. 10 may be iterated to incrementally induce and maintain the desired brainwave state in the subject.
- Using a closed loop system in this way enables the impact of the generated pulses to be measured and corrections made automatically as needed. This also enables the system to dynamically adapt to changes in the subject’s brainwave state due to other factors.
- FIG. 11 is a block diagram of an example computer 1100 suitable for use as a client device 140.
- the example computer 1100 includes at least one processor 1102 coupled to a chipset 1104.
- the chipset 1104 includes a memory controller hub 1120 and an input/output (I/O) controller hub 1122.
- a memory 1106 and a graphics adapter 1112 are coupled to the memory controller hub 1120, and a display 1118 is coupled to the graphics adapter 1112.
- a storage device 1108, keyboard 1110, pointing device 1114, and network adapter 1116 are coupled to the I/O controller hub 1122.
- Other embodiments of the computer 1100 have different architectures.
- the storage device 1108 is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device.
- the memory 1106 holds instructions and data used by the processor 1102.
- the pointing device 1114 is a mouse, track ball, touchscreen, or other type of pointing device, and may be used in combination with the keyboard 1110 (which may be an on-screen keyboard) to input data into the computer system 1100.
- the graphics adapter 1112 displays images and other information on the display 1118.
- the network adapter 1116 couples the computer system 1100 to one or more computer networks, such as network 170.
- the network adapter 1116 may also provide direct connections to other devices, such as a Bluetooth® connection to the headset 110.
- computers used by the entities of FIGS. 1 through 3 can vary depending upon the embodiment and the processing power required by the entity. Furthermore, the computers can lack some of the components described above, such as keyboards 1110, graphics adapters 1112, and displays 1118.
- any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
- the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- use of “a” or “an” preceding an element or component is done merely for convenience. This description should be understood to mean that one or more of the elements or components are present unless it is obvious that it is meant otherwise.
Abstract
A neurostimulation headset includes one or more neural activity sensors, one or more ultrasound transducers, and a controller. The controller provides neural activity sensor data to a client device. The client device determines from the neural activity sensor data an ultrasound pulse sequence that is predicted to induce a desired brainwave state in a subject wearing the neurostimulation headset and sends instructions to the controller that cause the one or more ultrasound transducers to generate the ultrasound pulse sequence.
Description
WEARABLE NEUROSTIMULATION DEVICE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/591,942, filed October 20, 2023, which is incorporated by reference.
BACKGROUND
1. TECHNICAL FIELD
[0002] The subject matter described relates generally to neurostimulation and, in particular, to a wearable device that provides neurostimulation to encourage lucid dreaming.
2. BACKGROUND INFORMATION
[0003] Neurostimulation is the process of modulating the nervous system. Neurostimulation techniques offer a long list of potential benefits: pain relief, epilepsy management, treatment of depression, rehabilitation, and more. Neurostimulation is a continuously evolving technological field and numerous techniques have been developed. One particular area of interest is using neurostimulation to induce lucid dreaming.
[0004] Typical neurostimulation methods are non-dynamic processes. Methods like focused ultrasound, electrical current stimulation, and magnetic stimulation usually provide stimulation using a predefined set of parameters, such as intensity, duration, or duty cycle. This can limit the effectiveness of these techniques, as they fail to account for the current state of the subject's brain activity or the impact of other stimuli that may be present.
Furthermore, many existing electrical and magnetic stimulation techniques have limited resolution with regard to targeting specific areas of the brain, which further limits their effectiveness.
SUMMARY
[0005] The above and other problems may be addressed by a wearable neurostimulation device that uses feedback from electroencephalography (EEG) sensors to control neurostimulation of the subject. One of the defining characteristics of a lucid dream versus a regular dream is the activation of the frontal lobe. This can be measured by a gamma frequency power shift. The wearable neurostimulation device can induce lucid dreams by creating this neural activation artificially using transcranial focused ultrasound. Transcranial
focused ultrasound stimulation is a non-invasive technique and may provide various advantages over other techniques. These advantages can include focused modulation, improved depth of penetration, and reduced spread.
[0006] With regard to focused modulation, transcranial focused ultrasound stimulation uses focused ultrasound beams that can be directed to specific brain regions. The focal point of an ultrasound beam can be adjusted to target a precise area of the brain, which can provide better spatial resolution than electricity-based techniques. With regard to depth of penetration, transcranial focused ultrasound stimulation can also penetrate deeper into the brain than electricity-based techniques, allowing greater flexibility in which portions of the brain are stimulated. Finally, with regard to reduced spread, the currents used in electricity-based techniques can spread across the scalp and underlying tissues, while focused ultrasound beams can remain tightly focused on the target brain region, reducing off-target effects.
[0007] In various embodiments, the wearable neurostimulation device is a headset that includes one or more EEG sensors and a set of ultrasound transducers. The data generated by the EEG sensors is provided as input to a model (which may be hosted on the headset or an external device with a data connection to the headset) which generates specific instructions for the ultrasound transducers to provide targeted neurostimulation to induce lucid dreaming.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] FIG. 1 is a block diagram of a neurostimulation system, according to one embodiment.
[0009] FIG. 2 is a block diagram of the headset of FIG. 1, according to one embodiment.
[0010] FIG. 3 is a block diagram of the client device of FIG. 1, according to one embodiment.
[0011] FIG. 4 illustrates operation of the encoder of FIG. 3, according to one embodiment.
[0012] FIG. 5A is a flowchart illustrating the operation of the encoder and a decoder in conjunction, according to one embodiment.
[0013] FIG. 5B illustrates the transformer architecture, according to one embodiment.
[0014] FIG. 6 illustrates the use of a set of driving signals to provide focused ultrasound, according to one embodiment.
[0015] FIG. 7 shows an example headset design, according to one embodiment.
[0016] FIG. 8 illustrates the structure of an ultrasound lens in a piezoelectric material, according to one embodiment.
[0017] FIG. 9 illustrates a stack of ultrasound transducers in a piezoelectric material that may be used to focus an ultrasound beam on a target, according to one embodiment.
[0018] FIG. 10 is a flowchart of a method for controlling a set of ultrasound transducers to induce a desired brainwave state, according to one embodiment.
[0019] FIG. 11 is a block diagram illustrating an example of a computer suitable for use as a client device of FIG. 1, according to one embodiment.
DETAILED DESCRIPTION
[0020] The figures and the following description describe certain embodiments by way of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods may be employed without departing from the principles described. Wherever practicable, similar or like reference numbers are used in the figures to indicate similar or like functionality. Where elements share a common numeral followed by a different letter, this indicates the elements are similar or identical. A reference to the numeral alone generally refers to any one or any combination of such elements, unless the context indicates otherwise.
EXAMPLE SYSTEMS
[0021] FIG. 1 illustrates one embodiment of a neurostimulation system 100. In the embodiment shown, the neurostimulation system 100 includes a headset 110 connected to a client device 140. In the embodiment shown, the headset 110 is connected to the client device 140 via a network 170, but this should be understood broadly to encompass any data connection between the devices, such as a Bluetooth® or other peer-to-peer connection. In other embodiments, the neurostimulation system 100 includes different or additional elements. In addition, the functions may be distributed among the elements in a different manner than described. For example, in some embodiments, the headset 110 may include an integrated computing system that performs the functionality described below with reference to the client device 140, removing or lessening the need for a separate client device to control the headset.
[0022] The headset 110 includes one or more ultrasound transducers that can generate targeted ultrasound to stimulate specific portions of the wearer’s brain. The headset can also include one or more sensors (e.g., EEG sensors) to measure neural activity of the wearer. In
some embodiments, one or more combined EEG sensors and ultrasound transducers may be used. The measured neural activity may be used (e.g., by sending it to the client device 140 for processing) to control the ultrasound transducers. The combination of sensors and transducers may provide a closed feedback loop that induces a desired brainwave state in the wearer (e.g., brainwave states conducive to lucid dreaming, focus, meditation, or a positive mood, etc.). Various embodiments of the headset 110 are described in greater detail below, with reference to FIGS. 2 and 7.
[0023] The client device 140 is a computing device that can control operation of the headset. In one embodiment, the client device 140 is a smartphone or other computing device of the wearer that runs a dedicated application (or "app") for controlling the headset 110. The client device 140 may connect to the headset 110 via Bluetooth® and receive sensor data indicating neural activity of the wearer. A model (e.g., a machine-learning model) may be applied to the sensor data to determine instructions for controlling the transducers, which are sent back to the headset 110. The model is trained or otherwise configured to generate instructions that stimulate portions of the user's brain that induce a desired brainwave state. Various embodiments of the client device 140 are described in greater detail below, with reference to FIG. 3.
[0024] The network 170 provides the communication channels via which the other elements of the neurostimulation system 100 communicate. The network 170 can include any combination of local area and wide area networks, using wired or wireless communication systems. As described previously, the network 170 may additionally or alternatively include direct connections such as Bluetooth® or other peer-to-peer links. In one embodiment, the network 170 uses standard communications technologies and protocols. For example, the network 170 can include communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network 170 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network 170 may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some
embodiments, some or all of the communication links of the network 170 may be encrypted using any suitable technique or techniques.
[0025] FIG. 2 illustrates one embodiment of the headset 110. In the embodiment shown, the headset 110 includes a REM sensor 205, one or more neural sensors 210, one or more ultrasound transducers 220, a connection module 230, and a controller 240. In other embodiments, the headset 110 includes different or additional elements. In addition, the functions may be distributed among the elements in a different manner than described. For example, one or more combined neural sensors and ultrasound transducers may be used rather than these being two distinct sets of components in the headset 110.
[0026] The REM sensor 205 collects data indicative of when the wearer is experiencing rapid eye movement (REM) sleep. In one embodiment, the REM sensor 205 is one or more electrooculogram (EOG) sensors. The EOG sensor or sensors measure electrical potential between the front and back of one or both of the wearer’s eyes that correlates with eye movement.
[0027] The neural sensors 210 measure neural activity of the wearer. In one embodiment, the neural sensors 210 include EEG sensors that measure voltage variations due to neural activity of the wearer and generate a corresponding EEG signal. The EEG signal can include various frequency components corresponding to brainwave states in target portions of the brain. EEG signals with frequencies in the gamma band (from 25 Hz to 140 Hz) are of particular interest for inducing lucid dreaming. Specifically, neural activity with a frequency of approximately 40 Hz is understood to be responsible for lucidity while asleep. Thus, stimulating neural activity in the frontal cortex in this frequency range can induce lucid dreaming. Other frequencies may be applied to induce other desired brainwave states.
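For illustration, the relative gamma-band power of an EEG trace can be estimated with a simple FFT-based computation. This is a minimal sketch: the `band_power_fraction` helper and the synthetic signal are assumptions for illustration, not part of the described system.

```python
import numpy as np

def band_power_fraction(signal, fs, lo_hz, hi_hz):
    """Fraction of total signal power within [lo_hz, hi_hz].

    Hypothetical helper: the document does not specify how gamma-band
    activity is quantified; this is one common FFT-based approach.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return power[in_band].sum() / power.sum()

# Synthetic one-second "EEG" trace: a 40 Hz gamma component plus a
# smaller 10 Hz (alpha) component.
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

# Gamma band per the text: 25 Hz to 140 Hz (effectively capped at the
# Nyquist frequency, 128 Hz, for this sampling rate).
gamma_fraction = band_power_fraction(eeg, fs, 25.0, 140.0)
```

With these synthetic components, most of the power falls in the gamma band, so `gamma_fraction` is close to 0.8.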
[0028] Additionally or alternatively, the neural sensors 210 can include one or more functional near-infrared spectroscopy (fNIRS) sensors. These sensors enable continuous-wave fNIRS analysis of the wearer's neural activity. Continuous-wave fNIRS is a non-invasive neuroimaging technique that quantifies brain activity by measuring the changes in blood flow near a region-of-interest (ROI). It is accurate, portable, and robust to head movement. fNIRS technology takes advantage of the relative transparency of human tissue to light in the near-infrared (IR) optical window. IR light is directed onto and passes through skin and bone tissues but is absorbed by the blood. The fNIRS sensors measure the attenuation in the intensity of the light due to absorption, and an estimate of neural activity in the ROI can be calculated using the Beer-Lambert Law.
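The Beer-Lambert calculation mentioned above can be sketched as follows. The modified Beer-Lambert law is standard fNIRS practice; the helper name and all numeric values in the example are illustrative assumptions, not taken from this document.

```python
import math

def delta_concentration(i_measured, i_baseline, epsilon, distance_cm, dpf):
    """Modified Beer-Lambert law: estimate the change in chromophore
    concentration (e.g., oxygenated hemoglobin) from the change in
    detected near-infrared light intensity.

    epsilon:     extinction coefficient of the chromophore
    distance_cm: source-detector separation
    dpf:         differential pathlength factor
    """
    delta_attenuation = -math.log10(i_measured / i_baseline)  # optical density change
    return delta_attenuation / (epsilon * distance_cm * dpf)

# A 10% drop in detected intensity implies increased absorption,
# i.e., a positive concentration change (all inputs illustrative).
change = delta_concentration(i_measured=0.9, i_baseline=1.0,
                             epsilon=2.0, distance_cm=3.0, dpf=6.0)
```

No intensity change yields a zero concentration change, and a drop in detected intensity yields a positive one.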
[0029] The ultrasound transducers 220 generate ultrasound waves that can be focused to form beams to stimulate targeted portions of the brain with desired frequencies to induce a desired brainwave state. For example, the ultrasound transducers 220 can perform the above-referenced stimulation of the frontal cortex at approximately 40 Hz to induce lucid dreaming. Any suitable components may be used for generation and detection of ultrasound, including multi-element transducers (ultrasound arrays), an ultrasound generation system, or semiconductor transducers, such as capacitive micromachined ultrasonic transducers (CMUTs). Furthermore, although the ultrasound transducers 220 are shown as a single element, some embodiments may use separate ultrasound transmitters and receivers.
[0030] The connection module 230 manages the connection between the headset 110 and the client device 140. In one embodiment, the connection module 230 provides the neural signal (e.g., an EEG signal) generated by the neural sensors 210 to the client device 140, which processes the neural signal using a model to generate instructions for the ultrasound transducers 220. The connection module 230 receives the instructions and passes them to the controller 240. In some embodiments (e.g., embodiments targeted to inducing lucid dreaming), when the wearer puts on the headset before falling asleep it operates in a low-power mode in which only the REM sensor 205 is active while the neural sensors 210 and ultrasound transducers 220 are inactive. The connection module 230 provides the signals generated by the REM sensor 205 to the client device 140, which determines from these signals when the user enters a REM sleep state. The connection module 230 receives a signal from the client device 140 that the user has entered REM sleep and activates the neural sensors 210 and ultrasound transducers 220 to monitor the wearer's neural activity and induce it into a desired state (e.g., lucid dreaming).
[0031] The controller 240 generates control signals based on the instructions received from the client device 140 to drive the ultrasound transducers 220 to generate ultrasound beams that target the desired portions of the wearer’s brain to induce the desired brainwave state (e.g., lucid dreaming). The combination of the components of the headset 110 (in conjunction with the client device 140) thus provide the feedback loop that can dynamically respond to the wearer’s neural activity to induce the desired response via neurostimulation.
[0032] FIG. 3 illustrates one embodiment of the client device 140. In the embodiment shown, the client device 140 includes a data ingest module 310, a REM detection module 315, an encoder 320, a decoder 330, and an instructions module 340. In other embodiments, the client device 140 includes different or additional elements. In addition, the functions may be distributed among the elements in a different manner than described.
[0033] The data ingest module 310 receives data generated by the REM sensor 205 and neural sensors 210 from the headset 110. The data ingest module 310 may preprocess the received data. For example, the data ingest module may perform quality control checks to detect transmission or sensor glitches, convert received signals into a target format for use with the model, filter aspects of the received signals that are not of interest (e.g., neural activity signals outside of the gamma band in a lucid dreaming application), and the like.

[0034] The REM detection module 315 uses signals provided by the REM sensor 205 to detect when the wearer is experiencing REM sleep. The signals provided by the REM sensor 205 may be analyzed using one or more rules to determine whether the wearer is currently experiencing REM sleep. For example, a frequency of changes in eye position above a first threshold amount may be calculated and, if the frequency exceeds a second threshold amount, the wearer may be classified as experiencing REM sleep. Additionally or alternatively, a machine-learning classifier may be applied to the signals provided by the REM sensor 205 to determine whether the wearer is experiencing REM sleep. The classifier may be trained on a set of training data collected by REM sensors attached to other individuals and labelled based on a human expert's determination of which portions of the data correspond to REM sleep versus other sleep (or being awake).
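The two-threshold rule for REM detection described above might be sketched as follows. The threshold values, sampling representation, and function name are hypothetical illustrations, not specified by the document.

```python
def is_rem(eye_positions, fs, move_threshold=1.0, rate_threshold_hz=0.5):
    """Rule-based REM check sketched from the text: count changes in eye
    position larger than a first threshold, then classify the wearer as
    in REM sleep if the rate of such changes exceeds a second threshold.

    eye_positions: sampled eye positions (units illustrative)
    fs:            sampling rate in Hz
    Both default thresholds are hypothetical values.
    """
    moves = sum(
        1
        for prev, cur in zip(eye_positions, eye_positions[1:])
        if abs(cur - prev) > move_threshold
    )
    duration_s = len(eye_positions) / fs
    return (moves / duration_s) > rate_threshold_hz
```

A trace of rapid alternating deflections classifies as REM, while a still trace does not.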
[0035] The encoder 320 takes the EEG signals as input (subject to any preprocessing performed) and outputs a classification vector containing information about the neural activity represented by the EEG signal. The classification vector indicates one or more of a discrete set of labels that apply to the neural activity represented by the EEG signal. For example, the classification vector may indicate that lucid dreaming is occurring or is not occurring, that increased activity is occurring on the right or left side of the brain (or both), or that no increase in neural activity is detected, etc.
[0036] FIG. 4 illustrates one embodiment of the encoder 320. In the embodiment shown, the encoder 320 has a transformer encoder architecture that receives EEG signals and applies convolutional matrix operations to pool and tokenize signals that can be used in a transformer
model self-attention mechanism. This includes applying dot products, standardization scaling, softmax operations, a feedforward neural network, and multiple fully connected layers. The output of the encoder is the classification vector. The encoder may be trained on a dataset of manually labeled EEG signals to produce classification vectors with a desired degree of accuracy (e.g., at least a threshold value for both precision and recall, etc.).
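As a concrete (and greatly simplified) illustration of the self-attention core mentioned above, scaled dot-product attention can be written in a few lines of NumPy. This is a generic sketch of the mechanism, not the encoder 320 itself.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: dot products between queries and
    keys, scaling by sqrt(d_k), a softmax over the scores, and a
    weighted sum of the values.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # dot products + scaling
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax
    return weights @ v                            # weighted sum of values
```

When all queries and keys are equal, every score ties and the softmax weights are uniform, so each output row is simply the mean of the value rows.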
[0037] Referring back to FIG. 3, the classification vector may be provided as input to a decoder 330. The decoder 330 takes the classification vector as input and produces a sequence of one or more ultrasonic pulses that are predicted to induce the desired brainwave state in the wearer of the headset 110 given the current neural activity indicated by the EEG signal. FIG. 5A illustrates a general transformer architecture in which the encoder 320 and decoder 330 may be used together to generate instructions for generating ultrasound to induce a desired brainwave state. In this embodiment, the decoder 330 is a generative decoder transformer block that can be trained based on historical data indicating pulse sequences that were successful and pulse sequences that were unsuccessful in inducing the desired brainwave state (e.g., lucid dreaming or conditions that encourage lucid dreaming in the wearer's brain) given the starting neural activity of the wearer. For example, the pulse sequence generated by the decoder 330 may include steering an ultrasound beam to mimic neural activation patterns observed in neural activity training data (e.g., fMRI training data) of individuals in the desired brainwave state. To supplement the closed-loop system, the model may be trained on a mass collection of additional neural activity data (e.g., EEG data) of individuals in the desired brainwave state. This training creates a set of weights in the encoder block's feedforward neural network to identify patterns of the user's current brainwave state, which in turn is passed to the decoder block that generates the ultrasonic pulse sequences. It should be appreciated that any type of data that is indicative of brainwave state may be used for training, not just fMRI data and EEG data. FIG. 5B illustrates a specific embodiment of the transformer architecture that may be used with a combination of fMRI and EEG data, but that can also be adjusted to use any type of neural activity sensor data by fine-tuning the transformer with appropriate training data.
[0038] Referring back to FIG. 3, the instructions module 340 takes the pulse sequence generated by the decoder 330 and packages it as a set of instructions to send to the headset 110. The instructions may be sent to the headset 110 and implemented by the controller 240 to generate the pulse sequence generated by the decoder 330.
[0039] FIG. 6 illustrates example driving signals that may be used to drive a set of elemental ultrasound transducers 220 to steer the resulting ultrasound beam. As shown in FIG. 6, a set of driving signals where each transducer 220 is in phase may focus the beam directly in front of the set of transducers while staggering the phase from top to bottom or vice versa may focus the beam towards the bottom or top of the array of transducers 220, respectively.
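The staggered-phase steering shown in FIG. 6 follows the standard phased-array delay relation for a linear array. The sketch below assumes an element count, element pitch, and soft-tissue sound speed for illustration; none of these values come from the document.

```python
import math

SOUND_SPEED_TISSUE_M_S = 1540.0  # common soft-tissue approximation

def steering_delays(n_elements, pitch_m, angle_deg, c=SOUND_SPEED_TISSUE_M_S):
    """Per-element firing delays that tilt a linear array's beam by
    angle_deg: delay_i = i * pitch * sin(angle) / c, shifted so the
    earliest-firing element has zero delay. A zero angle (all elements
    in phase) fires the beam straight ahead.
    """
    theta = math.radians(angle_deg)
    raw = [i * pitch_m * math.sin(theta) / c for i in range(n_elements)]
    earliest = min(raw)
    return [d - earliest for d in raw]
```

Driving all elements in phase (zero delays) focuses directly ahead, while monotonically increasing delays stagger the phase across the array and tilt the beam, matching the behavior described for FIG. 6.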
[0040] FIG. 7 illustrates an example embodiment of the headset 110. The headset includes a headband that can sit comfortably around the wearer’s head. The neural sensors 210 (in this case, EEG sensors) and ultrasound transducers 220 may be disposed on an interior surface of the headband such that they rest against the wearer’s forehead. This position enables efficient stimulation of the frontal regions of the brain and similarly easy acquisition of feedback regarding the resulting neural activity.
[0041] Typically, ultrasonic transducers have cylindrical shapes and are made of rigid materials, which is not conducive to wearer comfort, especially for a headset 110 intended to be worn during sleep. In some embodiments, the comfort of the headband may be improved using ultrasound transducers 220 made from a piezoelectric fabric that can be incorporated directly into the headband. By weaving layers of conductive fabric together with piezoelectric material shaped as a cylindrical lens, the resulting material can flex and change shape while still generating focused ultrasound beams. Example piezoelectric materials include lead zirconate titanate (PZT) and polyvinylidene fluoride (PVDF).
[0042] A conductive fabric may be made in a variety of ways, either by attaching conductive elements to a non-conductive fabric or by making the fabric itself from conductive threads. In one embodiment, metallic thread weaving is used in which metal wires or metallic threads (e.g., made from silver, gold, stainless steel, or copper, etc.) are woven or knitted into traditional textiles. The result is a fabric that maintains much of its textile flexibility but can also conduct electricity.
[0043] In another embodiment, deposition or plating is used. Fabrics can be coated with a thin layer of metal using techniques like sputtering or chemical vapor deposition.
Electroless plating may also be used, which is a chemical process where a fabric is soaked in a solution containing metal ions, and a chemical reducing agent is used to deposit the metal onto the fabric's fibers. In each case, the fabric retains its flexibility while the thin metallic layer imparts conductive properties.
[0044] In a further embodiment, an intrinsically conductive polymer such as polyaniline or polythiophene may be used. These polymers can be blended with other textile fibers or coated onto fabrics to make them conductive.
[0045] In yet another embodiment, conductive inks or pastes may be used to create a conductive fabric. Conductive inks or pastes, containing materials like silver flakes or carbon particles, can be printed, painted, or screen-printed onto fabrics, providing them with conductive traces.
[0046] In additional embodiments, carbon infusion or embedding/encapsulation of conductive particles may be used to create a conductive fabric. With carbon infusion, conductive carbon particles (e.g., carbon black, graphite, or carbon nanotubes) are infused into fabrics to give them conductive properties. These carbon particles can be added during the fiber production process or coated onto existing fabrics. Similarly, conductive particles such as silver nanoparticles or carbon nanotubes can be embedded into polymer fibers. When these fibers are woven or knitted into fabrics, the resulting textile is conductive.
[0047] Finally, in some embodiments, dipping or impregnation may be used to create a conductive fabric. A fabric is dipped into a conductive solution, coating it with a layer of conductive material. This can be done using solutions of conductive polymers or suspensions of metallic particles.
[0048] When a conductive fabric is being woven, the piezoelectric material can be integrated to provide one or more ultrasound focusing elements within the fabric. The piezoelectric material may be integrated by various techniques, such as weaving/braiding, deposition, or embedding. With weaving/braiding, a piezoelectric fiber is woven or braided into the fabric among the non-piezoelectric fibers. With deposition, processes such as sputtering, spin-coating, or electrospinning are used to deposit the piezoelectric material onto the fabric. With embedding, piezoelectric particles are embedded into polymers, and the resulting composite material is shaped into fibers or sheets that are used to make the fabric.
[0049] FIG. 8 illustrates one embodiment of an ultrasound transducer formed from a piezoelectric material within a fabric. The transducer’s geometry causes it to act as a lens for the generated ultrasound waves, focusing them towards a point. In the embodiment shown, one side of the transducer is concave. Thus, the transducer inherently focuses the generated ultrasonic waves. It should be appreciated that other transducer geometries may be used to focus ultrasonic waves.
[0050] As each layer of conductive fabric is created, a layer of piezoelectric material is included that has a cross section that causes it to focus ultrasonic waves (e.g., the cross-section shown in FIG. 8). Each successive layer of piezoelectric material can include a hole to allow ultrasound from the previous layer to pass through and a lens geometry around the periphery to create and focus additional ultrasound waves, as illustrated in FIG. 9. Electrodes can be attached to the conductive materials, allowing an electric current to flow through, causing the piezoelectric material to vibrate and a focused ultrasonic beam to form.
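The focusing behavior of a concave element like that of FIG. 8 follows standard geometric acoustics: the focus sits near the element's center of curvature, and the -6 dB focal spot width scales with wavelength times F-number. The following is a minimal sketch of these textbook relationships; the frequency, curvature, aperture, and sound speed values are illustrative assumptions, not parameters from this specification:

```python
def focal_beam_metrics(freq_hz, roc_m, aperture_m, c_m_s=1540.0):
    """Geometric focusing metrics for a spherically concave transducer.

    The geometric focus of a concave (bowl-shaped) element lies at its
    center of curvature, i.e., at a depth equal to the radius of curvature
    (roc_m). The -6 dB lateral beam width at the focus is approximately
    wavelength * F-number, where F-number = focal depth / aperture diameter.
    The default sound speed is a typical soft-tissue value.
    """
    wavelength = c_m_s / freq_hz            # acoustic wavelength in the medium
    f_number = roc_m / aperture_m           # dimensionless focusing strength
    lateral_width = wavelength * f_number   # approximate -6 dB focal width
    return wavelength, f_number, lateral_width

# Illustrative example: a 500 kHz element with a 50 mm radius of
# curvature and a 25 mm aperture.
wl, fn, bw = focal_beam_metrics(500e3, 0.050, 0.025)
print(f"wavelength = {wl * 1000:.2f} mm, F# = {fn:.1f}, "
      f"focal width = {bw * 1000:.2f} mm")
# -> wavelength = 3.08 mm, F# = 2.0, focal width = 6.16 mm
```

Lower F-numbers (larger aperture relative to focal depth) produce a tighter focal spot, which is why stacking lens layers of increasing periphery, as in FIG. 9, can sharpen the combined beam.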
EXAMPLE METHODS
[0051] FIG. 10 illustrates a method 1000 for controlling a set of ultrasound transducers to induce a desired brainwave state in a subject, according to one embodiment. The steps of FIG. 10 are illustrated from the perspective of the client device 140 performing the method 1000. However, some or all of the steps may be performed by other entities or components. In addition, some embodiments may perform the steps in parallel, perform the steps in different orders, or perform different steps.
[0052] In the embodiment shown, the method 1000 begins with the client device 140 receiving 1010 neural activity sensor data. The neural activity data may be generated by one or more neural activity sensors (e.g., EEG sensors, fNIRS sensors, etc.) of the headset 110 and provided to the client device 140 via Bluetooth® or another data connection. The client device 140 generates 1020 a classification vector that represents brain activity of the wearer of the headset 110 by applying the neural activity data as input to an encoder. The client device 140 uses a decoder (e.g., a generative transformer decoder) to generate 1030 a pulse sequence from the classification vector. The pulse sequence is one predicted to induce or assist in inducing a desired brainwave state in the wearer of the headset 110 in view of the neural activity data. For example, the pulse sequence may be generated to induce a brainwave state conducive to lucid dreaming, focus, meditation, a positive mood, or any other state that may be classified from neural activity data. The client device 140 provides 1040 instructions to the headset 110 that cause the transducers of the headset to generate the pulse sequence.
[0053] The steps of FIG. 10 may be iterated to incrementally induce and maintain the desired brainwave state in the subject. Using a closed loop system in this way enables the impact of the generated pulses to be measured and corrections made automatically as needed.
This also enables the system to dynamically adapt to changes in the subject’s brainwave state due to other factors.
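The closed loop described above can be sketched in rough pseudocode form as follows. All function names, data shapes, and stub behaviors here are hypothetical placeholders standing in for the headset link and the transformer model, not the system's actual interfaces:

```python
import numpy as np

def read_neural_activity():
    """Receive one window of neural activity samples from the headset.

    Placeholder: 8 channels x 256 samples of random data standing in for
    EEG/fNIRS readings delivered over Bluetooth.
    """
    return np.random.randn(8, 256)

def encode(window):
    """Encoder stage: map raw neural activity to a classification vector.

    Placeholder featurization; the specification describes a learned
    transformer encoder here.
    """
    return window.mean(axis=1)

def decode(classification_vector, target_state):
    """Decoder stage: map the classification vector to a pulse sequence.

    Placeholder: returns (frequency_hz, duration_s) pulse descriptors.
    The specification describes a generative transformer decoder here.
    """
    return [(500e3, 0.01) for _ in classification_vector[:4]]

def send_pulses(pulses):
    """Instruct the headset's ultrasound transducers to emit the pulses."""
    pass  # a real system would transmit instructions to the headset

def state_reached(classification_vector, target_state):
    """Check whether the measured activity matches the desired state."""
    return False  # placeholder classifier

def induce_state(target_state, max_iterations=10):
    """Closed loop: measure, stimulate, and re-measure until the state holds."""
    for _ in range(max_iterations):
        vector = encode(read_neural_activity())
        if state_reached(vector, target_state):
            break  # desired state observed; stop stimulating
        send_pulses(decode(vector, target_state))

induce_state("lucid_dreaming")
```

Because each iteration re-measures neural activity before generating the next pulse sequence, the loop both corrects for unexpected stimulation effects and tracks drift in the subject's state caused by outside factors.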
COMPUTING SYSTEM ARCHITECTURE
[0054] FIG. 11 is a block diagram of an example computer 1100 suitable for use as a client device 140. The example computer 1100 includes at least one processor 1102 coupled to a chipset 1104. The chipset 1104 includes a memory controller hub 1120 and an input/output (I/O) controller hub 1122. A memory 1106 and a graphics adapter 1112 are coupled to the memory controller hub 1120, and a display 1118 is coupled to the graphics adapter 1112. A storage device 1108, keyboard 1110, pointing device 1114, and network adapter 1116 are coupled to the I/O controller hub 1122. Other embodiments of the computer 1100 have different architectures.
[0055] In the embodiment shown in FIG. 11, the storage device 1108 is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 1106 holds instructions and data used by the processor 1102. The pointing device 1114 is a mouse, track ball, touchscreen, or other type of pointing device, and may be used in combination with the keyboard 1110 (which may be an on-screen keyboard) to input data into the computer system 1100. The graphics adapter 1112 displays images and other information on the display 1118. The network adapter 1116 couples the computer system 1100 to one or more computer networks, such as network 170. The network adapter 1116 may also provide direct connections to other devices, such as a Bluetooth® connection to the headset 110.
[0056] The types of computers used by the entities of FIGS. 1 through 3 can vary depending upon the embodiment and the processing power required by the entity. Furthermore, the computers can lack some of the components described above, such as keyboards 1110, graphics adapters 1112, and displays 1118.
ADDITIONAL CONSIDERATIONS
[0057] Some portions of the above description describe the embodiments in terms of algorithmic processes or operations. These algorithmic descriptions and representations are commonly used by those skilled in the computing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs comprising instructions for execution by a processor or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of functional operations as modules, without loss of generality.
[0058] Any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Similarly, use of “a” or “an” preceding an element or component is done merely for convenience. This description should be understood to mean that one or more of the elements or components are present unless it is obvious that it is meant otherwise.
[0059] Where values are described as “approximate” or “substantially” (or their derivatives), such values should be construed as accurate +/- 10% unless another meaning is apparent from the context. For example, “approximately ten” should be understood to mean “in a range from nine to eleven.”
[0060] The terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
[0061] Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for inducing lucid dreaming using a headset that provides targeted ultrasound. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the described subject matter is not limited to the precise construction and components disclosed. The scope of protection should be limited only by the following claims.
Claims
1. A computer-implemented method of neurostimulation, the method comprising: receiving neural activity sensor data, the neural activity sensor data representing current neural activity of a subject; generating, from the neural activity data, a classification vector representing the neural activity of the subject; generating, from the classification vector, an ultrasound pulse sequence predicted to induce a desired brainwave state in the subject; and providing instructions to one or more ultrasound transducers to generate the ultrasound pulse sequence.
2. The computer-implemented method of claim 1, further comprising: receiving rapid eye movement (REM) sensor data, the REM sensor data representing current eye movements of the subject; determining, from the REM sensor data, that the subject is currently experiencing REM sleep; and responsive to determining that the subject is currently experiencing REM sleep, sending an instruction to activate one or more neural activity sensors associated with the subject to generate the neural activity sensor data.
3. The computer-implemented method of claim 2, wherein the REM sensor is an electrooculogram (EOG) sensor.
4. The computer-implemented method of any one of claims 1 through 3, wherein the neural activity sensor data are generated by one or more neural activity sensors that are part of a headset worn by the subject and the one or more ultrasound transducers are also a part of the headset worn by the subject.
5. The computer-implemented method of claim 4, wherein the one or more neural activity sensors include at least one of an electroencephalography (EEG) sensor or a functional near-infrared spectroscopy (fNIRS) sensor.
6. The computer-implemented method of claim 4, wherein the one or more ultrasound transducers includes at least one of a multi-element transducer, an ultrasound generation system, or a capacitive micromachined ultrasonic transducer (CMUT).
7. The computer-implemented method of any one of claims 1 through 6, wherein the classification vector is generated by an encoder of a machine-learning transformer and the ultrasound pulse sequence is generated by a decoder of the machine-learning transformer.
8. The computer-implemented method of any one of claims 1 through 7, wherein the steps of receiving neural activity sensor data, generating a classification vector, generating an ultrasound pulse sequence, and providing instructions to one or more ultrasound transducers to generate the ultrasound pulse sequence are iterated to induce and maintain the desired brainwave state in the subject.
9. The computer-implemented method of any one of claims 1 through 8, wherein the desired brainwave state is a brainwave state conducive to: lucid dreaming, focus, meditation, or a positive mood.
10. A non-transitory computer-readable storage medium comprising instructions for a method of neurostimulation, the instructions, when executed by a computing system, causing the computing system to perform operations including: receiving neural activity sensor data, the neural activity sensor data representing current neural activity of a subject; generating, from the neural activity data, a classification vector representing the neural activity of the subject; generating, from the classification vector, an ultrasound pulse sequence predicted to induce a desired brainwave state in the subject; and providing instructions to one or more ultrasound transducers to generate the ultrasound pulse sequence.
11. The non-transitory computer-readable storage medium of claim 10, wherein the operations further include: receiving rapid eye movement (REM) sensor data, the REM sensor data representing current eye movements of the subject; determining, from the REM sensor data, that the subject is currently experiencing REM sleep; and responsive to determining that the subject is currently experiencing REM sleep, sending an instruction to activate one or more neural activity sensors associated with the subject to generate the neural activity sensor data.
12. The non-transitory computer-readable storage medium of claim 11, wherein the REM sensor is an electrooculogram (EOG) sensor.
13. The non-transitory computer-readable storage medium of any one of claims 10 through 12, wherein the neural activity sensor data are generated by one or more neural activity sensors that are part of a headset worn by the subject and the one or more ultrasound transducers are also a part of the headset worn by the subject.
14. The non-transitory computer-readable storage medium of claim 13, wherein the one or more neural activity sensors include at least one of an electroencephalography (EEG) sensor or a functional near-infrared spectroscopy (fNIRS) sensor.
15. The non-transitory computer-readable storage medium of claim 13, wherein the one or more ultrasound transducers includes at least one of a multi-element transducer, an ultrasound generation system, or a capacitive micromachined ultrasonic transducer (CMUT).
16. The non-transitory computer-readable storage medium of any one of claims 10 through 15, wherein the classification vector is generated by an encoder of a machine-learning transformer and the ultrasound pulse sequence is generated by a decoder of the machine-learning transformer.
17. The non-transitory computer-readable storage medium of any one of claims 10 through 16, wherein the operations of receiving neural activity sensor data, generating a classification vector, generating an ultrasound pulse sequence, and providing instructions to one or more ultrasound transducers to generate the ultrasound pulse sequence are iterated to induce and maintain the desired brainwave state in the subject.
18. The non-transitory computer-readable storage medium of any one of claims 10 through 17, wherein the desired brainwave state is a brainwave state conducive to: lucid dreaming, focus, meditation, or a positive mood.
19. A neurostimulation headset configured to be worn by a subject, the neurostimulation headset comprising: a REM sensor configured to generate REM sensor data representing current eye movements of the subject; one or more neural activity sensors configured to generate neural activity sensor data representing current neural activity of the subject; one or more ultrasound transducers configured to generate ultrasound beams focused on target portions of the subject’s brain; and
a controller configured to: send the REM sensor data to a client computing device via a network; receive, from the client computing device via the network and in response to the REM sensor data, an instruction to activate the one or more neural activity sensors; provide the neural activity sensor data to the client device via the network; receive, from the client device via the network and in response to the neural activity sensor data, instructions to cause the one or more ultrasound transducers to generate a specified ultrasound pulse sequence that is predicted, based on the neural activity sensor data, to induce a desired brainwave state in the subject; and cause the one or more ultrasound transducers to generate the specified ultrasound pulse sequence.
20. A fabric comprising: an electrode; an ultrasound lens formed from a piezoelectric material; and a conductive material connecting the electrode to the ultrasound lens such that a current can flow causing a vibration of the piezoelectric material that in turn causes a focused ultrasound beam to form from an unfocused ultrasound wave.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363591942P | 2023-10-20 | 2023-10-20 | |
| US63/591,942 | 2023-10-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025085747A1 true WO2025085747A1 (en) | 2025-04-24 |
Family ID: 95448968
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/051974 Pending WO2025085747A1 (en) | 2023-10-20 | 2024-10-18 | Wearable neurostimulation device |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025085747A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190082990A1 (en) * | 2017-09-19 | 2019-03-21 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement |
| US20200139112A1 (en) * | 2016-09-19 | 2020-05-07 | Nyx Technologies Ltd | Multifunctional closed loop neuro feedback stimulating device and methods thereof |
| US20210353205A1 (en) * | 2018-09-13 | 2021-11-18 | Quantalx Neuroscience Ltd | A reliable tool for evaluating brain health |
| US20220062580A1 (en) * | 2020-08-26 | 2022-03-03 | X Development Llc | Multimodal platform for engineering brain states |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | | EP: The EPO has been informed by WIPO that EP was designated in this application (ref document number: 24880648; country of ref document: EP; kind code of ref document: A1) |