Docket No.2957092-000002-WO2 Filed December 23, 2024 UNITED STATES PATENT APPLICATION FOR: HEADSET APPARATUS Inventors: Jose Contreras-Vidal, Jeff Feng, Jose Gonzalez-Espana, Maxine Annel Pacheco- Ramírez, Lianne Sánchez-Rodríguez RELATED APPLICATIONS This application claims the benefit of United States Application No.63/614,460, filed December 22, 2023. TECHNICAL FIELD The disclosure herein involves a headset device for positioning of electroencephalogram (EEG) and electrooculography (EOG) sensors on the head of a human subject. BACKGROUND With the technology and signal processing advancements in recent years, mobile brain- body imaging (MoBI) systems are transformed into much more ambulatory devices, which opens up a potential for the advancement of the ecological validity of brain imaging research and more practical solutions for in-home medical monitoring and brain-computer interface (BCI) applications, as well in consumer electronics applications. With an increasing interest and demand in applying EEG scans in real-world environments, MoBI systems are developed to record brain dynamics during different tasks in the medical and non-medical fields. Even though there is an uptrend of developing commercial headsets in BCI-related research, consumer-like user-friendly headsets are still rare. Most of the commercially available portable systems are relatively expensive, require proprietary software to function, and lack flexibility or modularity. Ergonomically, headsets are not designed to be truly easy and intuitive to use. They often require trained technicians to help to put on the headset and operate the system. There is a growing need for a low-cost MoBI headset that can be set up with user-friendly ergonomics that is easy to operate and performs a quality scan and data collection consistently. Studies show most headsets on the market often do not fit as well as soft EEG caps. Headsets with a poor fit to the user will
Docket No.2957092-000002-WO2 Filed December 23, 2024 likely lose scanning signals due to the unstable sensor-skin contact and shifting position while in use. To date, traditional EEG caps are still the best in terms of accommodating both size and shape variation. There is a need for an easy to use one-hand operated headset that provides a custom fit for all users. INCORPORATION BY REFERENCE Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference. SUMMARY OF THE INVENTION A headset device is described herein comprising under an embodiment a lower band comprising an outer surface and an inner surface, wherein a front portion of the lower band comprises at least one sensor positioned on the inner surface, wherein at least one arm extends from the lower band, wherein a proximal end of the at least one arm is rotatably attached to the lower band, wherein a distal end of the at least one arm comprises a sensor, wherein a rear portion of the lower band comprises adjustable straps for adjusting a circumference of the lower band. The headset device comprises an upper band comprising an upper surface and a lower surface, wherein at least one dry electrode component extends from the lower surface of the upper band, wherein the upper band is adjustably attached to the lower band. A method is described herein under an embodiment comprising configuring a headset device for detection of electrical signals, wherein the headset device includes a lower band and upper band, wherein the lower band comprising an outer surface and an inner surface, wherein a front portion of the lower band comprises at least one sensor positioned on the inner surface, wherein a rear portion of the lower band comprises adjustable straps for adjusting a circumference of the lower band, wherein the upper band comprises an upper surface and a lower surface, wherein the upper band is adjustably attached to the lower band. The method includes configuring a first coupling of at least one dry electrode component to the upper band, wherein the first coupling comprises the at least one dry electrode component extending from the lower surface of the upper band. The method includes configuring a second coupling of at least one
Docket No.2957092-000002-WO2 Filed December 23, 2024 sensor arm to the lower band, wherein the second coupling comprises the at least one sensor arm extending from the lower band, wherein a proximal end of the at least one sensor arm is rotatably attached to the lower band, wherein a distal end of the at least one sensor arm comprises a sensor. A headset device is described herein comprising a band comprising an outer surface and an inner surface, wherein the band comprises a plurality of EEG sensors located on an inner lateral area of the inner surface, wherein the inner lateral area corresponds to a temporal region of the wearer of the headset, wherein the band comprises at least one EOG sensor located on the inner surface above an eye of the wearer, wherein the band comprises a video camera on the outer surface, wherein the camera is directed in a line of sight of the wearer. In embodiments, the headset comprises adjustable straps for adjusting a circumference of the band. In embodiments, one or more applications running on at least one processor of the headset device. In embodiments, the one or more applications configured to receive an EEG signal from the plurality of EEG sensors. In embodiments, the one or more applications configured to receive an EOG signal from the at least one EOG sensor. In embodiments, the one or more applications configured to receive a motion signal from an accelerometer and a gyroscope sensor of the headset. In embodiments, the one or more applications configured to receive a video signal from the camera. In embodiments, wherein the EEG, EOG, video signal, and motion signal are synchronized. In embodiments, the one or more applications configured to filter the EEG signal, wherein the filtering comprises applying artifact removal to the EEG signal to remove eye blink noise. In embodiments, the filtering comprises applying artifact removal to the EEG signal to remove eye motion noise.
Docket No.2957092-000002-WO2 Filed December 23, 2024 In embodiments, the filtering comprises applying artifact removal to the EEG signal to remove movement noise. In embodiments, the artifact removal comprises H-Infinity adaptive noise cancellation filtering. In embodiments, the artifact removal uses information of the EOG signal to identify eye blink information. In embodiments, the artifact removal uses information of the accelerometer and gyroscope to identify movement information. In embodiments, the one or more applications configured to apply a trained learning model to the filtered EEG signal to classify the filtered EEG signal in a first duration of time. In embodiments, the one or more applications configured to analyze the video signal in the first duration of time to determine a context. In embodiments, the one or more applications configured to analyze the model classification and the context in the first duration to determine an emotional state of the wearer. In embodiments, the plurality of EEG sensors located adjacent to a FT7 region under the 10-20 system. In embodiments, the plurality of EEG sensors located adjacent to a FT8 region under the 10-20 system. In embodiments, the plurality of EEG sensors located adjacent to a T7 region under the 10-20 system. In embodiments, the plurality of EEG sensors located adjacent to a T8 region under the 10-20 system. In embodiments, the at least one EOG sensor located adjacent to a FP2 region under the 10-20 system. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 shows a perspective view of a headset device, under an embodiment. Figure 2 shows a rear view of a headset device, under an embodiment. Figure 3 shows a front view of a headset device under an embodiment. Figure 4 shows a right side view of a headset device, under an embodiment. Figure 5 shows a left side view of a headset device, under an embodiment.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Figure 6 shows a top view of a headset device, under an embodiment. Figure 7 shows a bottom view of a headset device, under an embodiment. Figure 8 shows a cross sectional view of a dry electrode component, under an embodiment. Figure 9 shows a cross sectional view of a dry electrode component, under an embodiment. Figure 10A shows a top view of a dry electrode component, under an embodiment. Figure 10B shows side view of a dry electrode component, under an embodiment. Figure 10C shows a side view of a dry electrode component, under an embodiment. Figure 10D show a perspective view of a dry electrode component, under an embodiment. Figure 11A shows a top view of a dry electrode assembly, under an embodiment. Figure 11B shows a side view of a dry electrode assembly, under an embodiment. Figure 11C shows a side view of a dry electrode assembly, under an embodiment. Figure 11D shows a perspective view of a dry electrode assembly, under an embodiment. Figure 12A shows a top view of a holder, under an embodiment. Figure 12B shows a side view of a holder, under an embodiment. Figure 12C shows a side view of a holder, under an embodiment. Figure 12D shows a perspective view of a holder, under an embodiment. Figure 13A shows a top view of a dry electrode assembly residing within a holder, under an embodiment. Figure 13B shows a side view of a dry electrode assembly residing within a holder, under an embodiment. Figure 13C shows a side view of a dry electrode assembly residing within a holder, under an embodiment. Figure 13D shows a perspective view of a dry electrode assembly residing within holder, under an embodiment. Figure 14 shows an exploded view of the headset device, under an embodiment. Figures 15A-15D show a dry electrode component, under an embodiment. Figures 16A-16D show a holder of a dry electrode component, under an embodiment. Figures 17A-17D show a dry electrode assembly, under an embodiment.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Figures 18A-18D show a dry electrode assembly positioned within a holder, under an embodiment. Figures 19A-19C show a cap attached to a holder, under an embodiment. Figures 20A-20D show a holder of a dry electrode component, under an embodiment. Figures 21A-21D show a proximal cylindrical body of a dry electrode assembly, under an embodiment. Figures 22A-22D show a cap of a dry electrode component, under an embodiment. Figure 23 shows Design criteria adopted in this research to maximize the translational impact of noninvasive (non-surgical) closed-loop BCI technology, under an embodiment. Figure 24 shows an exploded view of a headset along with dry electrode assembly, under an embodiment. Figure 25 shows a block diagram of an EEG amplifier board, under an embodiment. Figure 26 shows a custom EEG-based BCI headset with wireless tablet-based graphical user interface (GUI) and an IoT-enabled powered upper-limb exoskeleton robotic device deployed in a sample neurorehabilitation application, under an embodiment. Figure 27 shows impedance values from the open-loop sessions for five participants Figure 28 shows sensor data corresponding to eye blink and movement, under an embodiment. Figure 29 shows data corresponding to eyes closed, eyes open, and head movement task, under an embodiment. Figure 30 shows spectrogram and relative power data, under an embodiment. Figure 31 shows a user-friendly interface that presents real-time impedance measurements, and easy-to-use survey functionality for direct user feedback, and debugging interface, under embodiment. Figure 32 shows movement-related cortical potential data, under an embodiment. Figure 33 shows a closed-loop BCI–robot neurorehabilitation system, under an embodiment. Figure 34 shows movement-related cortical potential data, under an embodiment. Figure 35 shows inertial measurement unit specification, under an embodiment. Figure 36A-36D shows a raster plot of synchronized EEG, EOG and IMU data, under an embodiment.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Figure 37 shows a perspective view of a headset, under an embodiment. Figure 38 shows a front view of a headset with camera, under an embodiment. Figure 39 shows a front view of a headset with camera, under an embodiment. Figure 40 shows a perspective view of a headset with camera, under an embodiment. Figures 41-46 shows standard orthographic views of a headset with camera, under an embodiment. Figure 47 shows a system for emotion recognition, under an embodiment. Figure 48 shows a system for emotion recognition, under an embodiment. Figure 49 shows a perspective view of an exploded headset side panel, under an embodiment. Figure 50 shows an inside side panel view of an exploded headset side panel, under an embodiment. Figure 51 shows a front view of an exploded headset side panel, under an embodiment. Figure 52 shows a side view of a reference electrode arm, under an embodiment. Figure 53 shows a front view of a reference electrode arm, under an embodiment. Figure 54 shows a perspective view of a reference electrode arm, under an embodiment. Figure 55 shows a bottom view the front facing portion of the headset, under an embodiment. Figure 56 shows a front view of view the front facing portion of the headset, under an embodiment. Figure 57 shows an exploded interior view of the front facing panel, under an embodiment. Figure 58 provides an overview of the MindSpring device system, under an embodiment. Figure 59 shows a Sony earbud, under an embodiment. Figure 60 shows an attachment mechanism that integrates seamlessly with Sony earbuds, under an embodiment. Figure 61 shows a workflow for system processing and use of data, under an embodiment. DETAILED DESCRIPTION
Docket No.2957092-000002-WO2 Filed December 23, 2024 Figure 1 shows a perspective view of a headset device 100. The headset comprises a lower band 102 and an upper band 104. The lower band encircles the head of a subject while the upper band extends over the top of the head. The upper band may be designed to pass across anterior (frontal), central (as shown) or posterior areas of the skull. A front portion 106 of the lower band passes around the forehead while a rear portion 110 of the lower band terminates in a casing 112 positioned at the rear of the head. The inner side of the front portion 106 of the lower band positions one electrooculogram (EOG) sensor 114 along the forehead to measure electrical signals generated by eye blinks and eye movements. (Note that element number 114 in Figure 1 shows potential locations of such sensor. Further additional EOG sensors may be placed along the interior of the lower band’s front portion). The sensor is under one embodiment a flat snap EEG/ECG/EOG electrode with Silver/Silver Chloride (Ag/AgCl) Coating manufactured by Florida Research Institute, Cocoa Beach, FL. The lower band features four arms 108 that extend in a downward direction. The proximal end of each arm is attached to the lower band while the distal end features either an EOG or reference sensor. (Under the embodiment shown in Figures 1 and 14, the arms attached to the front portion of the lower band feature EOG sensors while the arms attached to the rear portion of the lower band feature reference/ground sensors). As seen in Figure 14 a protrusion 120 at the proximal end of each arm 108 is secured by press fit through a receiving hole 122 in the lower band. The distal end of each arm 108 is attached to a sensor positioner 124 which receives a securing post 126 of a sensor 128. Once secured to the lower band 102, the arm is rotatable around the axis of attachment. As seen in Figure 4, the arms may rotate laterally in directions A and B. The rotatable coupling of each arm allows a wide range of flexibility in placement of distally located sensors. As seen in Figures 2, 3, 7, and 14, the upper band 104 features five dry EEG electrode components (as described in greater detail below). The electrodes are under one embodiment a Spike snap EEG electrode with a Silver/Silver Chloride Coating manufactured by Florida Research Institute. Alternative embodiments may provide for additional or fewer dry electrode components. Figure 14 shows an exploded view of the device 100. The view of Figure 14 illustrates that the dry electrode components 118 are positioned between an upper portion 160 and lower portion 162 of the upper band 104. When the upper portion and lower portion are secured together (in a snap fit), the dry electrode components extend towards the subject’s head. Note that the curvature of the
Docket No.2957092-000002-WO2 Filed December 23, 2024 upper band fixes the downwardly extending electrodes along a path that matches the curvature of the subject’s head. The lower band 102 is adjustably attached to the upper band 104. As seen in Figure 14, the lower band attaches to the upper band using a tongue 164 and groove 168 configuration. A tongue component 164 extending from the lower band is received by groove 168 in component 160 of the upper band. Figure 1 shows the upper band in a minimal peripheral distance configuration, i.e., tongue component 164 is completely received within the groove 168. The tongue/groove attachment allows adjustable separation of the upper band from the lower band thereby increasing or decreasing the peripheral distance of the upper band. Figure 8 shows a cross sectional view of a dry electrode component 118. The dry electrode component comprises a cap 140, a holder 152, and a dry electrode assembly 119. The dry electrode assembly 119 includes a distal spike element 146 and proximal cylindrical body 144. (Note that the spike element 146 may have other form factors without changing the functional aspects of the dry electrode assembly 119). The cap is threadably attached to the holder. The holder comprises an annular structure which receives the dry electrode assembly 119. The proximal cylindrical body 144 holds the spike element 146. The inner surface of the holder features three helical threads 148. The proximal cylindrical body 144 features three protrusion buttons 151. Each button tracks a thread as best demonstrated 142 in Figure 8. As the dry electrode assembly 119 moves in a proximal and distal direction, the button/thread configuration rotates the dry electrode assembly and therefore the distally located spike element. Figure 8 shows the dry electrode assembly 119 in a fully extended position. Figure 9 shows the dry electrode assembly 119 in a fully retracted position. Under an embodiment, a compression spring is located in the space 154 between an upper surface of the dry electrode assembly 119 and the cap 140 which biases the dry electrode assembly towards a fully extended position. Therefore, the dry electrode assembly 119 remains in extended position when not in use. In operation, a user places the helmet device on user’s head. The user’s head then urges the dry electrode assembly 119 in a proximal direction as the helmet device is seated. This proximally directed force causes buttons (or protrusions) 151 to slide along corresponding helical grooves 148 resulting in angular rotation of the dry electrode assembly 119 including the spike element 146. The angular rotation of the dry electrode assembly tunnels a pathway through a user’s hair to ensure contact between electrical contacts and scalp in the device’s seated position. The spring biases the dry electrode
Docket No.2957092-000002-WO2 Filed December 23, 2024 assembly 119 towards an extended position to ensure continued contact during use. In operation, the dry electrode assembly 119 translates perpendicularly from the skull surface with a translation range of l0 mm. In other words, the range of the dry electrode assembly’s proximal and distal motion is 10 mm. Alternative embodiments implement shorter or longer distances depending on application or to accommodate varying hair styles. Figures 10A-13D show an electrode component under an alternative embodiment. Figures 10A-10D show a cap 150 threadably attached to a holder 152. Figures 11A-11D illustrate dry electrode assembly 154 which comprises a proximal cylindrical body and distal spike element 156. As opposed to the electrode shown in Figures 8 and 9, the proximal cylindrical component features two protrusion buttons 158. Figures 12A-12D show the holder 152. The inner surface of the holder features two helical threads 159. Each corresponding button tracks a thread as the dry electrode moves in a proximal and distal direction thereby rotating the dry electrode assembly 154 and the distally located electrode spike element. Figures 13A-13D show the dry electrode assembly 154 positioned within the holder 152. Note that the protrusion buttons are offset from the helical threads for purposes of illustration. As indicated above the rear portion 110 of the lower band 102 terminates within a housing 112. The housing includes separable front and rear components 170a and 170b. Figure 14 shows the two components 170a and 170b separated thereby revealing the interior components of the housing. The interior components include a gear bracket 172, a gear 174, and adjustment wheel 176. The rear component 170b attaches directly to the gear bracket 172. Through holes 182 located on an upper and lower surface of the rear component 170b receive screws 180 that threadably attach to corresponding screw bosses on an upper and lower surface of the gear bracket 172. When the rear compartment is in a secured position, the wheel 176, gear 174, and bracket 172 collapse upon each other. A post extending from the wheel engages a receiving hole in the gear in a press or interference fit. The post terminates at a stopper component 190 which resides between the front component 170a and gear bracket 172. Rotation of the wheel 176 then rotates gear 174. As shown in Figure 14, the rear portion of 110 of lower band 102 comprises two adjustable straps which feature openings 186, 188. A receiving hole 184 in gear bracket 172 receives and locates gear 174 in a position for engaging teeth cut into openings 186, 188 of the adjustable straps. Note that opening 186 features teeth cut along its lower edge. A corresponding opening 188
Docket No.2957092-000002-WO2 Filed December 23, 2024 features teeth cut along its upper edge. Front component 170a snap fits onto gear bracket 172 and secures the adjustable straps and corresponding openings 186, 188 against an interior surface of the gear bracket 172. The attached front component 170a secures the openings in an overlapping configuration. When the front compartment 170a and the rear compartment 170b are attached to the gear bracket 172, the receiving hole 184 of the gear bracket and the openings 186, 188 receive gear 174 such that teeth of gear 174 simultaneously engage lower teeth of opening 186 and upper teeth of opening 188. A lower portion of wheel 176 extends through an opening (see Figure 7, 190) on a lower surface of the housing 112. In operation a user rotates the wheel to adjust the circumference of the lower band 102. As the user adjusts the wheel clockwise, rotation of gear 174 extends the straps outwardly in opposing directions thereby increasing the circumference of lower band 102. As the user adjusts the wheel counterclockwise, rotation of gear 174 extends the straps inwardly in opposing directions thereby decreasing the circumference of lower band 102. The housing is fit with a rear component 198 under an embodiment. The rear component serves both decorative and protective purposes. Under an embodiment, the rear component includes circuitry coupled to the sensors (as deployed by the headset device and as described above). The circuitry is configured to receive, store, and/or transmit sensor data. The headset device may transmit information to remote systems, computing devices, or other components through on or more communication paths. The communication paths may include wireless connections, wired connections, and hybrid wireless/wired connections. The communication paths also include couplings or connections to networks including local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), proprietary networks, interoffice or backend networks, and the Internet. Furthermore, the communication paths may include removable fixed mediums like floppy disks, hard disk drives, and CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB) connections, RS-232 connections, telephone lines, buses, and electronic messaging. Figures 15A-15D show a dry electrode component, under an embodiment. Figures 16A-16D show a holder of a dry electrode component, under an embodiment. Figures 17A-17D show a dry electrode assembly, under an embodiment. Figures 18A-18D show a dry electrode assembly positioned within a holder, under an embodiment.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Figures 19A-19C show a cap attached to a holder, under an embodiment. Figures 20A-20D show a holder of a dry electrode component, under an embodiment. Figures 21A-21D show a proximal cylindrical body of a dry electrode assembly, under an embodiment. Figures 22A-22D show a cap of a dry electrode component, under an embodiment. Figure 23 shows Design criteria adopted in this research to maximize the translational impact of noninvasive (non-surgical) closed-loop BCI technology, under an embodiment. Example 1 A wireless, low-cost, easy-to-use, mobile, dry-electrode headset for scalp electroencephalography (EEG) recordings for closed-loop brain–computer (BCI) interface and internet-of-things (IoT) applications is described herein under an embodiment. The EEG-based BCI headset was designed from commercial off-the-shelf (COTS) components using a multi- pronged approach that balanced interoperability, cost, portability, usability, form factor, reliability, and closed-loop operation. Main Results: The adjustable headset was designed to accommodate 90% of the population. A patent-pending self-positioning dry electrode bracket allowed for vertical self-positioning while parting the user’s hair to ensure contact of the electrode with the scalp. In the current prototype, five EEG electrodes were incorporated in the electrode bracket spanning the sensorimotor cortices bilaterally, and three skin sensors were included to measure eye movement and blinks. An inertial measurement unit (IMU) provides monitoring of head movements. The EEG amplifier operates with 24-bit resolution up to 500 Hz sampling frequency and can communicate with other devices using 802.11 b/g/n WiFi. It has high signal–to–noise ratio (SNR) and common–mode rejection ratio (CMRR) (121 dB and 110 dB, respectively) and low input noise. In closed-loop BCI mode, the system can operate at 40 Hz, including real-time adaptive noise cancellation and 512 MB of processor memory. It supports LabVIEW as a backend coding language and JavaScript (JS), Cascading Style Sheets (CSS), and HyperText Markup Language (HTML) as front-end coding languages and includes training and optimization of support vector machine (SVM) neural classifiers. Extensive bench testing supports the technical specifications and human-subject pilot testing of a closed-loop BCI application to support upper-limb rehabilitation and provides proof-of-concept validation for the device’s use at both the clinic and at home. Significance: The usability, interoperability, portability, reliability, and programmability
Docket No.2957092-000002-WO2 Filed December 23, 2024 of the proposed wireless closed-loop BCI system provides a low-cost solution for BCI and neurorehabilitation research and IoT applications. Introduction Since the early 1960s, when electroencephalography (EEG) data were first digitized and processed with a computer to today, much progress has been made in harnessing the potential of brain–computer interface (BCI) applications [1,2]. While EEG measurements are affected by many factors, including physiological and non-physiological artifacts [3] resulting in low signal-to-noise ratios [4], recent advancements in de-noising (e.g., [5,6]) and deep learning [7,8] techniques have driven the emergence of viable clinical and non-clinical BCI applications based on scalp EEG [1,2,9]. These applications include, but are not limited to, seizure state prediction [10,11], sleep stage analysis [12], cognitive workload assessment [13], motor-imagery-based brain–computer interface (BCI) systems [14,15], neurorehabilitation [16,17], multi-modal and multi-brain– computer interfaces [18], brain-controlled vehicles [19], EEG-based home control [20], virtual reality [21], and interactive virtual environments [22]. While the future of these proof-of-concept BCI-enabled applications is promising, there are a number of technical challenges that remain before the widespread translation and adoption of these systems is realized. Prior efforts from the scientific, engineering, medical, regulatory, industrial, and patient- advocate communities [23–26] have addressed the challenges and opportunities for accelerating the translation of closed-loop BCI systems for medical applications. Some of the key challenges identified in deploying these technologies to end-users include usability, interoperability, accessibility, and mobility, as well as the lack of standards (device, performance, clinical, and end- user metrics). For example, current commercial EEG amplifiers and BCI headsets are prohibitively expensive, lack interoperability, or fail to provide a high signal quality or closed-loop operation, which are vital for BCI applications [23]. To address these challenges and facilitate the translation of BCI systems, we adopted criteria derived from the above stakeholder meetings for the design of closed-loop BCI systems (Figure 23). Next, we briefly review these criteria. The reader is referred to the source publications from these stakeholder meetings for additional details. Figure 23 shows design criteria adopted in this research to maximize the translational impact of noninvasive (non-surgical) closed-loop BCI technology (adapted with permission from [23]).
Docket No.2957092-000002-WO2 Filed December 23, 2024 Portability [27] and interoperability both affect the type of BCI applications that can be considered. Most commercial EEG systems are tethered to immobile processing hardware, making them difficult to deploy outside of the clinic or laboratory. A portable and wireless EEG system is highly preferred so it can be used outside lab and clinical settings in clinical and non-clinical mobile applications at home, work, or play. Additionally, a system design that only provides control of a single device or the analysis of a single protocol significantly limits the potential for BCI systems, so a generalized control or analysis framework is preferred over a device-, task- or protocol-specific system to maximize interoperability in the widest sense. Usability [28,29], form factor [30], and reliability [31] all significantly affect the user’s experience. The current commercial EEG systems are generally difficult to set up and use, particularly in medical applications by users with disabilities. This is a critical challenge for applications that will be used by the public as a complex system setup may be too difficult or take too long for an untrained user to operate without technical or expert assistance. A difficult challenge in the design of an EEG headset is accommodating the many different head sizes and shapes, hair types and styling, and user preferences, but designing many different variations may not be economically feasible nor desirable for a commercial system. While a one-size-fits-all design is preferable, the ability for the system to be adaptable must be emphasized early in the design process and heavily tested in ecological settings. Moving this technology to low-cost hardware will increase accessibility, but, if the system is not reliable, the resulting user frustration may lead to product abandonment. Therefore, extensive software and hardware bench testing must be performed to ensure reliability. Outside of factors that affect the design considerations and the user’s experience, the ability for the system to process EEGs quickly and effectively is a necessary condition for complex closed-loop BCI applications. This necessity is due to the fact that EEG suffers from a low signal- to-noise ratio, low spatial resolution, and high prevalence of artifacts, such as eye movements, eye blinks, and motion artifacts [32], to name a few. Many of the commonly used signal de-noising methods are not suitable for real-time or mobile applications [5,6], so the selection of on-chip real- time signal-de-noising methods is a crucial decision that should be considered early on in the development process. Once the EEG signals are de-noised, a neural decoder or neural classifier is commonly employed to extract valuable information, e.g., motor intent, emotional state, or other classes of internal states, from the brain signals acquired with EEG [7,33]. However, most current
Docket No.2957092-000002-WO2 Filed December 23, 2024 EEG systems do not provide the decoding functionality necessary for implementing closed-loop BCI applications without additional hardware and software. The above challenges provided the motivation for the development of the proposed EEG-based closed-loop BCI headset. While there are low-cost commercial dry EEG amplifier systems available on the market, none meet the criteria outlined above in Figure 23. For example, the Ultracortex Mark IV EEG headset from OpenBCI [34] is a popular open-source EEG headset design and is sold for a relatively low cost ($399.99 for the user to 3D print the headset, $899.99 for the 3D-printed and assembled version at the time of publication). However, each headset electrode holder must be manually manipulated for each user, which is not as user-friendly as a design that employs a single manipulator for headset adjustments. Additionally, the OpenBCI headset does not provide processing onboard with the amplifier. Instead, it requires a separate computational unit for signal processing. The Muse 2 system [35] is one of the lowest-cost commercial amplifiers available ($249.99) and includes a software application that provides standard biofeedback. A major drawback with the Muse 2 system is that an annual subscription must be purchased to use many of the available software features. Additionally, the Muse 2 system only has two forehead sensors and two sensors located behind each ear, which limits the potential applications for systems based on this system. Like the OpenBCI Mark IV headset, the Muse 2 system does not have onboard processing capabilities, meaning a separate computing unit must be employed. In another example BCI system [36], the researchers designed specialized dry EEG electrodes for a low-channel-count EEG system for steady-state visual evoked potential (SSVEP) applications. The main focus was to validate the dry-electrode design, so a relatively expensive commercial amplifier (NeuroScan Synamps, CompuMedics Neuroscan, Victoria, Australia) was used. In another study [37], a low- cost system integrating EEG and augmented reality (AR) capabilities was deployed for SSVEP- based applications. Instead of creating a custom amplifier, a low-cost two-channel EEG system for signal acquisition (EEG-SMT, Olimex, Plovdiv, Bulgaria) was used. Under an embodiment, inexpensive BCI system for upper-limb stroke rehabilitation was developed. This system relied on a higher-cost Emotiv (Emotiv Epoc+, Emotiv, San Francisco, CA, USA [39]) commercial amplifier and utilized open-source functionality from BCI2000 [40], without a dedicated user- friendly interface. While the market for commercial EEG amplifiers is expanding, there are no suitable commercial systems that meet the specifications required for more closed-loop BCI applications. For a recent review of portable EEG devices with wireless capability, see [41].
Docket No.2957092-000002-WO2 Filed December 23, 2024 The rest of the paper is organized as follows: Section 2 will describe the methods, including hardware and software selection and development, as well as the methodology for system validation using bench testing and human-subject experiments in the laboratory, clinic, and home. Section 3 presents the results of the system validation tests, including first-in-human validation in an ecological setting. Section 4 provides a discussion on crucial design decisions and the development of the system generally. We conclude with some lessons learned and next steps. Methods The design criteria were based on the recommendations from stakeholder meetings [23– 26,42]. The design choices based on the design factors shown in Figure 23 will be discussed in detail through the following sections. To define the product and the engineering specifications for the system, we parcelled these target specifications into four key areas: the headset specifications for a universally fitting design, the desired characteristics for the EEG amplifier and sensors for artifact detection, and the specifications for the brain–computer interface itself. These specific engineering requirements are detailed in Table 1. The following section will detail the user-centered design of the headset, the development of the software, and the approach followed for bench testing and experimental validation with human participants for the system. Table 1. Engineering specifications for the proposed closed-loop BCI device. Headset Specifications Circumference Adjustment Range (cm) 52.3–61.2 Head Breadth Adjustment Range (cm) 13.8–16.6 Head Length Adjustment Range (cm) 17.3–21.4 Electroencephalography (EEG) Frontocentral (FC) 3, FC1, FCz, FC2, FC4 Electrode Locations EEG Electrode Type Dry Comb Electrodes Electrooculography (EOG) Both Temples, Above Left Eye Electrode Locations Reference Electrode Locations Mastoids EOG and Reference Electrode Type Dry Flat Electrodes Amplifier Specifications
Docket No.2957092-000002-WO2 Filed December 23, 2024 Number of Channels 8 Signal–to–Noise Ratio (SNR) (dB) 121 Input Noise (µVPP) 1.39 Common–Mode Rejection Ratio (CMRR) 110 (dB) Analog–to–Digital Converter (ADC) 24 Resolution (bits) Impedance (MΩ) 1000 Maximum Sampling Rate (Hz) 500
Docket No.2957092-000002-WO2 Filed December 23, 2024 Table 1. Cont. Amplifier Specifications Bandwidth (Hz) DC-131 Input range (mV) ±104 Resolution (µV) 0.012 Inertial Measurement Unit Specifications ADC 16 Gyro Full-Scale Range (dps) 250–2000 Acc Full-Scale Range (g) 2–16 Zero offset error (for 250 dps) 5 Zero-g Offset (mg) ±50 Power Consumption Acc+Mgn (mW) 0.58 Power Consumption Gyro (mW) 4.43 Brain–Computer Interface Specifications Processor Speed (GHz) 1 Processor Memory (MB) 512 Processor Storage (GB) 4 Open-Loop Sampling Frequency (Hz) 80 Closed-Loop Sampling Frequency (Hz) 40 Communication 802.11 b/g/n WiFi Backend Coding Language LabVIEW Frontend Coding Language JavaScript (JS), Cascading Style Sheets (CSS), HyperText Markup Language (HTML) Machine Learning Capability Support Vector Machine De-noising Capabilities Low- and High-Pass Filters; Adaptive Noise Cancellation Battery Capacity (kWh) 2.96
Docket No.2957092-000002-WO2 Filed December 23, 2024 Headset Design Proper headset fit for the users is a critical factor affecting the system’s performance, usability, and comfort, but most headsets on the market do not fit as well as research-grade soft EEG caps [43,44]. Traditional soft EEG caps are still the most widely available option in terms of accommodating both head size and shape variations [45,46]; however, they have some disadvantages compared to a headset: (1) Disinfection: Headsets can be disinfected by surface cleaning while EEG caps need to be immersed, after removing the electrodes, into a disinfection solution for several minutes; (2) Donning/doffing: Headsets are usually faster to set up than EEG caps, which may require assistance, particularly if based on wet electrodes; (3) Electrode localization: Headsets can help to maintain correct electrode positioning while EEG caps may result in electrode displacements from session to session; (4) Fitting: Headsets typically have a mechanism for fitting head shape and size, whereas EEG caps need to be selected in some discrete ranges varying from small to extra-large, which may lead to poor electrode set-up in some cases as head size variations are continuous; (5) Form factor: Headsets may be more desirable in terms of the aesthetics than EEG caps; (6) Single-hand use: Headsets may allow single-hand use for donning/doffing, which may be critical for users with hemiparesis or other hand disabilities. Overall, the wide range of variations in human body biometrics demands flexibility and adjustability in designing a more accommodating headset. Anthropometry data are widely used as a reference of variations to design products with optimized fit, comfort, functionality, and safety [47]. In terms of size management, there are two different approaches. One approach is to offer the headset in different sizes to fit a wide range of users. Another approach is to offer a single size with adequate adjustments in multiple degrees to fit all users. Previous research in the development of a one-size-fits-all headset has found success, providing support for this approach [48,49]. One important requirement in the design of mobile devices is the need for single-handed device interaction as the headset will likely be used by people with a limited attention span and upper-limb and/or hand impairments, including reduced mobility and hand dexterity (e.g., older individuals and persons with chronic stroke [17]). These physical limitations significantly influence the details of the design, the mechanical controls, and the overall form factor. As indicated in other studies [50], the hardware design influences the user’s interaction with the device. For this reason, the design process should include a detailed ergonomic evaluation to ensure all controls are intuitive for one-hand use.
Docket No.2957092-000002-WO2 Filed December 23, 2024 As a device to be used directly by consumers, general usability factors should be considered and optimized, including the overall weight, adjustability, operational clarity and accuracy, user comfort, and aesthetics [51]. Additionally, a good fixation of the scalp and skin electrodes should be provided for reducing the contact impedance at the electrode–scalp/skin interface, which enhances the signal-to-noise ratio [46]. Electrodes The headset design process began by selecting the locations of five EEG channels. Five electrode locations (Frontocentral locations: FC3, FC1, FCz, FC2, and FC4) were selected with a reference to the international 10–20 system provided by the American Clinical Neurophysiology Society guidelines [52]. These were selected based on the proximity to the primary motor cortex and the effectiveness of using these electrodes for motor-related BCI paradigms, including motor imagery classification [53] and movement-related cortical potential (MRCP) identification [17]. Electrode locations may be modified within the 3D headset model for paradigms that require EEG collection from other areas of the scalp. Dry EEG comb electrodes with 5 mm extended prongs (Florida Research Instruments, Inc., Cocoa Beach, FL, USA) were selected for this device to maximize the usability and shorten the set-up time. Comb electrodes [54] are recognized as an effective solution for collecting EEGs through longer-hair conditions and the selected electrodes end in blunt tips for long-term wearing comfort. While these dry electrodes alone will likely go through users’ different hairstyles and/or hair types to reach the scalp, without a specific mechanism to secure and maintain a constant steady contact during use, it is still likely to fail the needs of most users and needed to be addressed during the design of the EEG electrode holders. An additional functionality of the headset is the capability to measure eye movements and eye blinks using electrooculography (EOG) sensors, whose outputs could be used for real-time de- noising of the EEG signals or even as additional control signals. Ancillary experiments (to be reported elsewhere) provided support that three EOG sensors can be used to effectively extract information about eye blinks and eye movements in the vertical, horizontal, and oblique axes. These EOG sensor locations are located at the right temple, the left temple, and directly above the participant’s left eye.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Two electrodes, one behind each ear, complete the set of electrodes/sensors available in the headset. The skin sensors are adjustable in position and orientation to adapt to and fit a wide range of face profiles and contours while maintaining a constant and steady contact. EEG Electrode-Holder Design One challenge for mobile EEG systems is to secure the electrodes and obtain good impedance for recordings. This is particularly important when using dry electrodes that cannot benefit from the viscous gel typically employed in wet-electrode systems. For the dry electrodes that are placed over the user’s hair, it is common to experience unstable and noisy signals due to poor or intermittent contact between the electrode and the head scalp [55,56]. To meet this challenge, a unique self-positioning dry-hair electrode holder was developed, as shown in Figure 24B. The holder is a proprietary (patent-pending) design for holding the designated electrode while providing a self-positioning rotational mechanical linkage that helps facilitate hair penetration of the electrode tips. The holder is 1.7 cm in diameter and 1.9 cm in height and is composed of three parts: the slider, the housing, and the cap. A screw-and-nut pair is used to fasten the electrode tip to the lower end of the slider. The fully shielded electrical wire is oriented between the screw and the electrode’s inner wall. The wire is routed through the center open space and the center hole on the cap. The slider is spring-loaded with a vertical travel of up to 10 mm. The electrode will move up and down along three spiral tracks, which allows for rotation of up to 120 degrees, to accommodate the regionally changing head shape. This rotation will assist the electrode tip in moving through the user’s hair for improved contact with the scalp. The spring will help to maintain a constant pressure between the electrode and the contact surface. The headset and electrode tip design are covered by US provisional patent #62857263. Figure 24A shows a fully assembled one-size-fit-all headset design, under an embodiment.. Figure 24B shows a dry-electrode bracket. Figure 24C shows a skin sensor holder, under an embodiment. EOG Electrode-Holder Design The headset system includes three electrooculography (EOG) sensors to track the users’ eye movements. Two sensors are positioned at the temple area along the side of each eye and a third is positioned directly above the user’s left eye. Typically, EOG skin sensors require the application of a conductive gel medium or tape to achieve steady constant contact with the skin. This headset is designed with accessibility for individuals with limited dexterity, so it is
Docket No.2957092-000002-WO2 Filed December 23, 2024 undesirable to use sensors that require gel or tape. For that reason, the headset uses dry skin sensors. A proprietary EOG sensor-holder arm was developed to maintain a constant contact with the skin. The holder is composed of two parts: an arm and an EOG sensor plug (Figure 24C). The EOG sensor sits in the socket of the plug and is wired through the hollow arm, which is connected to the main board. The arm is printed in a medical-grade skin-safe flexible resin and is designed with a unique structure and form that makes it flexible while maintaining a constant pressure at the tip. The EOG plug is formed similarly to accordion pleats, which makes the plug compressible and can be flexed in any direction. The plug sits in an opening at the tip of the arm with an interference fit. The arm is rotatable around the connection on the structure to handle variations in face contours between users. The sensor plug’s spring motion applies a constant pressure to the skin surface to maintain a steady contact. Headset Size and Adjustment Mechanism Design Anthropometric data [57] were used to determine the overall device size in relation to the range of head size variations. The sizing parameters are referenced from the measurements of the smallest (5th percentile female) to the largest (95th percentile male) head sizes. The key dimensions in design consideration are the head breadth, circumference, and length. The size range in three dimensions provides a guide for the design of the adjustment mechanisms. The differences in head breadth, circumference, and length between the 5th percentile female and the 95th percentile male are 2.7 cm, 8.9 cm, and 4.1 cm, respectively. A digital mannequin corresponding to the 5th percentile female was developed and then scaled up to the 95th percentile male. These two digital mannequin models served as the basis to build the headset model in a 3D digital SolidWorks environment. Traditional anthropometry calculation is based on a uniform variation in several dimensions. For instance, if the head length increases, the head breadth is expected to also increase by a consistent ratio. In some cases, the head breadth and the head length do not follow the common ratio due to unique head forms. This characteristic was confirmed with the real-world data collection for this study, which helped to determine a more realistic range of deviation. Due to this discrepancy, the head-breadth-adjustment mechanism was designed to be independent of the head- length-adjustment mechanism. Based on the electrode mapping and the general mechanical
Docket No.2957092-000002-WO2 Filed December 23, 2024 adjustment concept, an initial headset structure was developed, which includes 3-degree of freedom adjustments with a sufficient range to fit 90% of all users. The final design (Figure 24A) utilizes a large dial (6.5 cm in diameter and 0.4 cm in thickness) in the back to adjust the overall circumference. The end of the ear-hub band is designed with gear teeth in a slot along the center line. The left and right band overlap in the electrical box where they connect to the dial through the gear. The outer perimeter of the dial is shaped with fine convex diamond textures. The dial protrudes 0.6 cm out of the box and is designed to be turned easily in both directions with one finger. The dial’s clockwise rotation will extend two ear-hub parts to increase the headset circumference, whereas the counter-clockwise rotation will contract two parts to reduce the circumference. The overall circumference adjustment range is up to 8.9 cm. With a unique semi-flexible structure design, the headset is a one-size-fits-all solution. Headset Fabrication The 3D model for the headset was designed with SolidWorks (SolidWorks 2019, Dassault Systemes, Vélizy-Villacoublay, France) and prototyped with a 3D printing process. Two types of printers were used in producing the prototype. An Artillery Sidewinder X1 FDM printer (manufactured by Shenzhen Yuntu Chuangzhi Technology Co., Ltd., Shenzhen, China) was used for the rigid-structure printing, while a Saturn resin printer (manufactured by ELEGOO technology Co., Ltd., Shenzhen, China) was used to print the flexible components. Two medical-grade thermoplastic resins were selected for the primary headset components: Taulman Nylon 910 (produced by Taulman3D Material, Linton, IN, USA) and Flexible 80A resin (produced by Formlabs in Somerville, MA, USA). The Taulman Nylon 910 resin was used to build the rigid structural parts of the headset as it has similar strength and stiffness to polypropulene (PP) and is FDA-approved for skin contact, and the parts can be repeatedly bent while still returning to the original shape. The Flexible 80A resin was used to build all elastic parts and is also FDA-approved for skin contact. The resulting flexible headset parts are stiff but soft with an 80A shore durometer. In addition to these two primary resins, two additional resins were used for the internal components. Esun PLA+ was used to fabricate the rear adjustment plate and dry EEG brackets while Polymax PC resin (Polymaker, Shanghai, China) was used to fabricate the ratchet gear and adjustment dial. The finalized design is presented in Figure 24. From an aesthetic standpoint, an emphasis was placed on creating a headset with clean and smooth external surfaces.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Design of the BCI Module The following subsections detail the hardware and software component selections and development for the BCI module. Hardware Selections and Development The primary hardware considerations of the BCI module include the selection of the processing unit, the design and manufacturing of the custom amplifier, and the power system. Processor Selection The BeagleBone Black—Wireless (BBB-W) [58] was selected as the BCI processor for its low cost, availability, compatibility, and WiFi capabilities. Moreover, the availability of an open- source LabVIEW toolkit (LINX LabVIEW [59]) significantly reduced the software redesign. The BBB-W has a 1 GHz ARM processor, 512 MB of DDR3 RAM, and 4 GB of onboard storage, providing the computational power and storage space necessary for the BCI headset. Design of the Integrated Amplifier and Processing Board In EEG systems, an instrumentation amplifier acts to increase the amplitude of the detected signal to a level that can be further processed while an input buffer amplifier eliminates the need for impedance matching. Recently, the term amplifier has been broadened to also include the digitization of the analog signal through an analog-to-digital conversion (ADC) chip, wireless communication, and motion-detection system. In the proposed BCI system, there are three main components on the amplifier board: signal amplification, analog-to-digital conversion, and motion sensing. Following the ADC step, it is necessary to pre-process the signals before transmission to the processing unit. These steps are summarized in Figure 25. Figure 25 shows a block diagram of the EEG amplifier board, under an embodiment. With respect to the amplifier, there are some electrical characteristics that are expected with any EEG amplifier [60]. The ADS1299 chip from Texas Instruments (Dallas, TX, USA) [61] was selected as it best matched the intended functionality. Its characteristics are summarized in Table 1—section Amplifier Characteristics. The minimum requirements for the inertial measurement unit (IMU), which provides motion sensing, were low energy consumption, a digital signal with more than 10 bits resolution, and the inclusion of a 3-axis accelerometer and a 3-axis gyroscope. Table A1 in the Appendix A section presents the characteristics of the ICM-20948 [62], which was selected because of its low error, its low power consumption, and the availability of a magnetometer.
Docket No.2957092-000002-WO2 Filed December 23, 2024 For communication between the amplifier and the processing board, either an integrated approach or a system that relies on Bluetooth for communication between these modules must be selected. Rather than develop independent amplifier and processing board hardware modules that would communicate over Bluetooth, the possibility of missing data packets in this crucial stage, Bluetooth’s line-of-sight requirement, and the computational capabilities of the BBB made an integrated amplifier and processing unit more desirable. For this combined unit, the serial peripheral interface (SPI) communication protocol was employed for communication between the processing unit and the directly connected amplifier. Power System The BBB amplifier is powered by a relatively small 3.7 V battery (BatterySpace p/n: PL- 383562-2C single cell Polymer Li-Ion 3.7 V/800 mAh/2.96 Wh, 64 mm × 36 mm × 4 mm/ 18 g, UL listed, UN approved battery) because portability was an important design factor [63]. Based on the maximum expected power consumption of 1.48 kWh from our system due to signal processing and constant communication with an external device (e.g., smart phone or tablet), the battery guarantees at least two hours of use. For charging of the battery, the procedure described in the “Battery Power Source/Charger” section of the OSD3358 Application Guide [64] was implemented for the system. Software For the development of the device, LabVIEW (National Instruments Inc., Austin, TX, USA) was selected as the primary coding language due to its extensive libraries and access to National Instruments’ hardware and software in the early phases of the design. We note, however, that any coding language could instead be used with the selected hardware, and, in fact, a C++ version of the BCI firmware module has also been developed. This section details the main considerations, modular design, and resulting open- and closed-loop characteristics for the system software. The primary focus throughout the software development was on maintaining real-time capability, modularity, and flexibility to implement different BCI applications, thereby increasing the interoperability of the system. Firmware While LabVIEW real-time toolkits can sample at a constant frequency, this functionality requires the National Instruments onboard hardware clock, so setting a constant sampling frequency through LabVIEW is not possible on third-party processing boards. The firmware
Docket No.2957092-000002-WO2 Filed December 23, 2024 designed for the system instead employs spline interpolation, so the system can sample EEG and EOG at a rate set by the user, limited only by the computational power of the processing board. We have also developed a faster C++ implementation that does not require interpolation. Communication The BeagleBone Black—Wireless (BeagleBoard.org Foundation, Oakland Charter Township, MI, USA) processing board has both WiFi and Bluetooth capabilities (802.11 b/g/n WiFi and Bluetooth 4.1 plus BLE), which are important for the goal of creating a completely portable system. This gives the BCI device the capability of communicating with any device that can be controlled remotely. In addition to communicating with WiFi-enabled devices, to remain completely portable, the device includes a user interface that communicates with the system through the available LabVIEW web service. For the design of this interface, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), and Javascript (JS) were selected as the base languages for the interface, since they can be used for the creation of a cross-platform interface that can be accessed from any browser and display that can handle the computational demands of the system. Open-Loop Capabilities The BCI device can be used to collect and save raw data from a user according to an easily modifiable protocol. These data include five EEG channels, three EOG channels, and accelerometer data from the IMU. Due to the design considerations, the maximum sampling rate that can be achieved for raw data collection and saving is 80 Hz. To achieve this sampling rate, the system utilizes LabVIEW’s point-by-point virtual instruments and channel mechanisms. Sampling up to 80 Hz means future applications can be developed that require a spectral analysis of the Delta, Theta, Alpha, Beta, and lower Gamma frequency bands. EEG De-Noising Capabilities We implemented various real-time de-noising capabilities, including spline interpolation, low-pass filters, high-pass filters, and an H-Infinity adaptive noise cancellation filtering framework. Spline interpolation provides a mechanism to handle any lost data packets as well as the ability to maintain a constant sampling frequency, a requirement for accurate filtering. The low- and high-pass filters allow for the isolation of frequency bands, a method that can be used for the spectral analysis commonly found in EEG signal-processing paradigms. The H-infinity filter employs data collected from the three EOG sensors in the automatic real-time removal of eye
Docket No.2957092-000002-WO2 Filed December 23, 2024 movement and eye blink artifacts [5], which is one of the most common biological artifacts affecting EEG. In addition, it can detect and remove amplitude drifts and recording biases simultaneously [5]. A recent extension can identify and remove motion artifacts as well [6]. Closed-Loop Capabilities To test the closed-loop capabilities of the system, an example experimental protocol was implemented. This experimental protocol includes a real-time signal processing pipeline, training data collection, training of a machine learning model, testing of the trained model in real time, a graphical user interface (GUI), and constant communication with a third-party device. Due to design considerations, the system processes EEG and EOG data at 40 Hz and can save data at 20 Hz while simultaneously processing the signal, controlling a third-party WiFi device/object, and controlling a user interface over the web server. Sampling at up to 40 Hz supports applications that require a spectral analysis of the Delta, Theta, Alpha, and lower Beta bands. Further coding optimization effort could be made on the firmware design, which would likely allow for higher sampling frequencies. Modular Software Design While specific experimental protocols can influence the overall system software design, there are several key modules that will appear in many BCI systems. These common modules include an impedance check to assess the signal quality, a module for implementing the data- collection parameters and machine learning model training, a module to allow for user feedback through a survey mechanism, and a module for user help and troubleshooting. Additionally, as the system is designed to be used both inside and outside of a clinical setting, an extensive debugging user interface is necessary. Aesthetic Design of the User Interface—There were several aesthetic choices made during the user interface development that helped to further enhance the usability of the system. Colors and sizes were optimized to account for possible vision deficits by end-users. This includes large font sizes and components for those with poor vision and a color-blind-friendly design [65]. The development focused on hemianopia- and nystagmus-friendly design features, such as the button and icon designs, the logo position as a reference point, easing the cognitive workload, and creating a simple but appealing design [66]. Impedance Check—Ensuring signal quality involves measuring and displaying impedance values for the user so that, for electrodes that show high impedance values, the user can adjust the
Docket No.2957092-000002-WO2 Filed December 23, 2024 electrodes accordingly. Real-time display of these impedance values is therefore an essential module for BCI systems. Here, the module is designed to set up the amplifier, interpolate at a constant sampling frequency, filter at the prescribed subband range (as designated by the ADS1299 documentation), and send the resulting impedance values to the user interface in real time. Model Calibration—For applications that rely on machine learning model predictions for the acquisition of a control signal, training data must first be collected to train the machine learning model. The system allows for customization of the protocol for different BCI paradigms. Functionality has been built to allow for the acquisition of multiple days of data, which can then be used to train a machine learning model or monitor task performance and progress. As an initial machine learning model selection, the system includes a support vector machine (SVM) library (including hyperparameter optimization and n-fold cross validation), which the user can initiate from the user interface. Once the SVM model is trained, the user is then able to proceed with the model-testing stage. In addition to collecting EEG data for each testing trial, this module also collects protocol-specific characteristics, which can be analyzed later by a clinician or researcher to verify the progress of a user through a specific protocol. While only an SVM library has been developed, many types of machine learning model can be implemented in the device within the limits of the available onboard memory. Survey Collection—The proposed system includes a survey functionality that gives the user a way to provide feedback, which can be completed at any time. These results are stored onboard the processing unit for further analysis. This pop-up interface is presented in Section 3.3.1, which can be modified depending on the type of feedback desired for a particular application. Debugging Interface—For ease of use, significant effort was made in developing a debugging user interface. The device’s debugging interface, presented in Section 3.3.1, includes mechanisms to check whether the internal LabVIEW script is running, whether the web server is correctly activated, a signal-impedance check with a channel-selection mechanism, and a device- communication check. This provides the user with a series of simple steps that can be performed without guidance to address potential system faults. The debugging home screen provides the user with easy-to-understand descriptions of each debugging page to make troubleshooting as painless as possible.
Docket No.2957092-000002-WO2 Filed December 23, 2024 System Validation To demonstrate the features and functionality of the system, assessments were designed to validate three key areas: the headset design, the open-loop capability of the system, and the closed-loop capability of the system (see Table 2). Table 2. Bench testing and human-subject validation methodology. Headset Design Validation Test Name Description Assessment Tool/Specifications System Comfort Evaluation of user’s Questionnaire/Likert scale comfort level System Usability System Usability Scale SUS > 65 [67] (SUS) [28] Open-Loop BCI Validation Test Name Description Target Specifications Signal Quality Assessment of electrode and Impedance < 100 kOhm skin sensor impedance Eye Tracking EOG evaluation Detection of eye blinks and eye movements
Docket No.2957092-000002-WO2 Filed December 23, 2024 Table 2. Cont. Open-Loop BCI Validation Synchronized Acquire multi-modality data Synchronized EEG-EOG-IMU EEG-EOG-IMU streams to confirm recordings ≤ 4 ms synchronized streaming of data Open-loop BCI Assessment of EEG power Event-related Performance modulations in delta and mu desynchronization/synchronization bands during a (ERD/ERS) GO-NOGO task Closed-Loop Brain–Computer Interface Validation Test Name Description Target Specifications IoT Functionality Assess communication rates Communication rate < 50 ms for all between the headset and connected devices multiple types of devices SVM Model Evaluation of decoding Model accuracy ≥ 80%; detection Training accuracy for motor intent of MRCPs Closed-loop Evaluation of trained SVM for ≤50 ms closed-loop performance Performance online prediction of motor intent All tests were performed either at the University of Houston (UH) under a human-subjects protocol approved by the Institutional Review Board (IRB) at UH (IRB studies #3430 and #2515) or at the University of Texas Health Science Center of Houston (under IRB study HSC-MS-20- 1287). Five neurologically intact adults (four males and one female) were recruited and underwent a series of tests for validation of the headset design and open-loop BCI functionality. One 66-year- old male participant with chronic stroke, with hemiparisis on the left side of his body, participated in the validation of the closed-loop functionality during at-home use. All recruited participants gave their written informed consent prior to testing. Headset Design Validation
Docket No.2957092-000002-WO2 Filed December 23, 2024 Usability testing was conducted to validate the headset design. The testing focused on two key aspects: the overall participant comfort of the system during extended periods of use and the overall usability of the system based on the System Usability Scale (SUS) [28,68]. These tests were carried out with a diverse set of participants with varied head sizes, shapes, and hair types. Open-Loop Brain–Computer Interface Validation To evaluate the functionality of the BCI, a set of tests was performed that focused on the performance of the BCI in open-loop operations, impedance measurement, EOG measurement, and the synchronization of EOG, EEG, and head-movement data in real time. Closed-Loop Brain–Computer Interface Validation To assess the closed-loop capabilities of the system, an example deployment application from the neurorehabilitation literature was selected [17]. Specifically, a BCI–robot system, including an IoT-enabled robotic device and a tablet with a custom graphical user interface (GUI), is presented as an example of deployment in a neurorehabilitation application; see Figure 26. This specific implementation was chosen based on previous research on a closed-loop BCI for rehabilitation [17]. A BCI system for upper-limb rehabilitation after stroke that focused on detecting motor intent to control a motorized exoskeleton for the upper limb was developed. They achieved this by identifying a movement-related cortical potential (MRCP) that precedes voluntary movements of the upper limb (e.g., readiness potential). This type of cortical potential has been extensively studied [69–74] as a means of predicting motor intent. However, other brain features, such as changes in EEG rhythms, could be used to detect motor intent. An expensive high-density EEG system and a custom motorized upper-limb exoskeleton supervised by a team of trained technicians and physical therapists to conduct the clinical trial in stroke survivors was utilized. Encouraging clinical results were observed, with all participants showing sustained improvements in motor abilities following the cessation of the rehabilitation protocol. These positive outcomes, along with the necessity for increasing accessibility, usability, interoperability, and mobile deployment at home made this example application suitable for validation of the proposed BCI headset. In addition to the development of the system itself, this example deployment required the collection of data both in the clinic and at the participant’s home, which allowed for an assessment of the system’s usability outside of the clinic. Figure 26 shows a custom EEG-based BCI headset with wireless tablet-based (Fire 8, Amazon, Seattle, WA, USA) graphical user interface (GUI) and an IoT-enabled powered upper-limb
Docket No.2957092-000002-WO2 Filed December 23, 2024 exoskeleton robotic device (Rebless, H Robotics, Austin, TX, USA) deployed in a sample neurorehabilitation application, under an embodiment. Results Headset Design Validation Results In this section, we report the results from the system comfort and system usability scale assessments. These assessments were carried out to validate the final designs for the overall headset and the electrode holders. System Comfort Test Table 3 shows the system comfort results from five participants with a diverse range of head shapes, sizes, and hair types. Participants responded to the following questions: “Did the headset move during the study?”, “Did the headset cause the sensation of dents on your head?”, “Did the headset feel too big on your head?”, and “Did the headset feel too small on your head?”. The participants could choose from the following rating values: “Strongly Agree”, “Agree”, “Neutral”, “Disagree”, and “Strongly Disagree”. Although the overall level of comfort across the participants was high (e.g., 4.6/5 for three of the questions), two reported reduced comfort in one item due to the occurrence of feeling of dents on their scalp after two hours of use. During this assessment, it was confirmed that when a female participant whose head measurements matched the fifth percentile of female head sizes wore the headset, the headset was in its fully contracted state with a comfortable and secure fit. When repeating this assessment with a participant near the 95th percentile of male head circumference, the headband’s vertical sizing mechanism expanded 1.9 cm on both sides to accommodate the larger distance between the top of the head and the ears. Table 3. Comfort Score: 1: “Strongly Agree” to 5: “Strongly Disagree”. Participant # “Moving” “Dents” “Too Big” “Too Small” S1 5 5 5 5 S2 5 2 5 5 S3 4 2 3 3 S4 4 2 5 5 S5 5 3 5 5 Mean 4.6 2.8 4.6 4.6
Docket No.2957092-000002-WO2 Filed December 23, 2024 SD 0.548 1.304 0.894 0.894 System Usability Test The SUS [28] was used to assess the usability of the system. This metric has been employed previously in the assessment of usability for other BCI systems [67,75]. For the proposed system, the average SUS score among the five participants was 90.5, which is above the threshold (65) for an acceptable system [67]. All participants were able to independently and intuitively don the headset with only one hand. Open-Loop BCI Validation In this section, we report the results from the signal quality, EOG collection, IMU synchronization, and open-loop BCI assessments. Signal-Quality Test The impedance values from all electrodes were collected before and after the open-loop BCI test. The beginning and final impedance values for each electrode are presented in Figure 27. For all but two electrode impedance measurements, the electrode impedance values remained under 100 kΩ and for most electrodes they remained under 50 kΩ. Figure 27 shows Channel Impedance: Impedance values from the open-loop sessions for five participants. The values were taken before (blue) and after (orange) the session. The values are in kΩ, under an embodiment. Eye-Tracking Test In this test, we recorded eye blinks and horizontal and vertical eye movements from a center position using the GUI. Examples of eye blinks and tracking of eye movements, which were acquired at 80 Hz, are presented in Figure 28. Measurements of eye movements and eye blinks are critical for the identification and removal of ocular artifacts from EEGs in BCI systems or for use as additional signal sources for control. In our proposed system, H-infinity adaptive noise cancellation, an adaptive filtering technique that requires representations of the EOG signals, is implemented on board for real-time operation [5]. Figure 28A shows eye blinks information, under an embodiment. A participant (S4) was instructed to blink three times during a session. The plot shows the signal detected by the vertical EOG sensor. Figure 28B shows eye movements information, under an embodiment: The same
Docket No.2957092-000002-WO2 Filed December 23, 2024 participant was instructed to move her eyes left–to–right and right–to–left over a period of 15 s. The resulting plot shows the oscillating EEG due to these repetitive eye movements. Synchronized EEG–EOG–IMU Test The synchronized acquisition of EEG, EOG, and IMU data from the user’s head is important for characterizing head movement and the identification and removal of potential motion artifacts from the EEG signals [6]. Figure 29 depicts raster plots of EEG measurements acquired during conditions with (A) eyes closed, (B) eyes open, and (C) head movements collected at 80 Hz. A band-pass filter from 1 Hz to 50 Hz was applied to the signals and no additional de-noising methods were employed. Figure 29C depicts a raster plot showing synchronized EEG and IMU recordings during head movements towards the front, back, left, and right for one participant. As expected, the head motion, as displayed by the IMU channels (e.g., ACC and GYRO), coincides with motion artifact contamination of the EEG data. Additionally, as compared to the eyes-open and eyes-closed conditions, EEGs during head movement experience an increase in gamma activity due to EMG contamination, which matches the prior literature on EMG contamination during head movement (see Figure 29D) [76]. Figure 29D depicts the spectral characteristics of EEG during eyes-open, eyes-closed, and head-movement conditions. These spectral characteristics demonstrate the 1/f spectrum typical of EEG signals. Moreover, the EEG during the eyes-closed condition shows a modest increase in alpha (8–12 Hz) power as compared to the eyes-open condition [77] as the electrodes are positioned over motor areas rather than occipital areas where large alpha waves would be expected. Figure 29 shows characterizations of EEG in three task conditions, under an embodiment. (Figure 29A). Eyes Closed (EC): A participant was instructed to maintain eyes closed for a period of 8 s during the session. (Figure 29B). Eyes Open (EO): The participant maintained eyes open for a period of time. (Figure 29C). Head Movement (HM): The participant was asked to move the head towards the front, back, left, and right for a period of time. The resulting plot demonstrates correct synchronization of EEG and IMU data based on the resulting movement artifacts in the EEG signal. (Figure 29D). Spectral Comparison between EO, EC, and HM conditions. Open-Loop Performance To further assess the spectral characteristics of EEG, four participants underwent two blocks of 21 trials of a simple GO–NOGO paradigm. In this paradigm, the system’s user interface first asked the participant to fix their attention on a cross (NOGO) for five seconds. The user’s interface
Docket No.2957092-000002-WO2 Filed December 23, 2024 then presented a circle and indicated to the user to move their arm from a horizontal to a vertical position (GO). The expected spectral trend for a paradigm of this nature [78–80] would be that, when moving from NOGO to GO, the relative power in the µ band should increase while the relative power in the δ band should decrease. Figure 30 shows that the relative power in these two bands for all participants follow our expectations. The paired t-tests with Rest/Move factor for all electrodes, except FC
4, were significant (p < 0.0001): t(167) = 11.8, p = 9.3 x 10
-24 for δ, and t(167) = –9.3, p = 7.0 × 10
-17 for µ. Closed-Loop BCI Validation In this section, we report the findings from closed-loop BCI assessments, including IoT functionality and BCI decoder training and performance. The closed-loop BCI validation was designed based on the BCI–robot neurorehabilitation study described in [17,81] and tested on an individual diagnosed with chronic stroke. A significant difference is that testing of the participant was carried out first at the clinic and then at his home, as described below. Figure 30 shows spectrogram and relative power, under an embodiment. Figures A–D feature plots showing the average spectrogram for participants S1–S4 from 0.5 s before movement onset (MO) to 2 s after MO. Figure #shows average relative power in the δ and µ frequency bands among participants. The average is based on two blocks with twenty trials each, under an embodiment. IoT Functionality Test A general BCI system must be able to interact with a wide range of IoT-enabled devices. In this example deployment, the system’s communication rate via WiFi was verified in two ways: communication with a robot rehabilitation device and with several different WiFi-enabled tablets for the visual (GUI) display. In this test, communication with the rehabilitation device was found to remain under 50 ms. The displays and browsers tested include the iPhone (7+ or greater) and an Amazon Fire tablet with the Google Chrome, Microsoft Edge, and Amazon Silk browsers, with all tested browsers and displays able to maintain a communication rate under 25 ms. Figure 31 presents the GUI developed for this example deployment and the means to assess the real-time communication rate between the tablet and the system. Support Vector Machine Model Training Figure 32 presents the movement-related cortical potentials (MRCPs) recorded through the experimental protocol, which were then used to train the SVM neural decoding model. Table 4
Docket No.2957092-000002-WO2 Filed December 23, 2024 presents the decoding accuracies on S005’s data for a model trained with the hyperparameters displayed in Table 4, where the rejection rate refers to the amount of outliers in the data to be rejected for the training and validation of the model. All models were trained with four-fold cross- validation. Figure 31A shows a user-friendly interface that presents real-time impedance measurements, under an embodiment. Figure 31B shows an easy-to-use survey functionality for direct user feedback, under an embodiment. Figure 31C shows a debugging interface that can be used for troubleshooting of the system by the user, including a real-time metric for the communication rate between the system and the selected tablet. Figure 32 shows movement-related cortical potential, MRCP, under an embodiment. Following the protocol proposed by [17], we obtained the MRCP for participant S005. For each channel, the MRCPs were obtained from averaging 20 trials. The spatial average of those averages is the plot labeled “Average”. Channel FC3 was excluded due to its high impedance value for this participant. The vertical broken line represents the movement onset (MO). Table 4. Hyperparameter optimization: closed-loop model hyperparameter optimization using 4-fold cross-validation on participant S005’s data. Hyperparameter Optimization Rejection Rate Channels Not Used Accuracy 0 - 85.5% 0.1 - 97.4% 0.323 - 100.0% 0.3 - 100.0% 0 FC3 96.3% 0.1 FC3 98.6% 0.2 FC3 99.3% 0.3 FC3 99.1% Closed-Loop BCI Performance To assess whether the trained SVM could correctly predict motor intent during closed-loop BCI operation, the system was deployed during a series of GO (Move)–NOGO (Fixate) trials at the participant’s home after initial calibration in a clinical setting (Figure 33). For this test, the
Docket No.2957092-000002-WO2 Filed December 23, 2024 participant underwent two sessions per day, with each session consisting of three blocks of 20 trials over a period of six weeks and an average of six sessions per week. In Figure 34, we present signals classified by the trained model as representative of motor intent, where “Movement Intent” indicates when the model detected the participant’s motor intent using MRCPs. Each of these signals is the average of the 20 trials from the first block at the start of the protocol (in blue) and the last block at the end of the protocol (in orange). We can see here how the MRCP evolves across the six weeks of at-home BCI therapy for four of the five EEG channels (FC4, FC2, FCZ, FC1). This evolution is not evident in the case of FC3, which is the result of the relatively poor contact between that channel and the scalp of the participant, which had impedance values of greater than 100 kΩ). Figure 33 shows a closed-loop BCI–robot neurorehabilitation system in use at the home of the participant with chronic stroke, under an embodiment. Figure 34 shows average MRCP amplitudes at start and end of therapy, under an embodiment. The subplots present MRCPs across each of the five EEG electrodes recorded for participant S005 at start (block 1) and end (block 105) after six weeks of the at-home BCI therapy. Each MRCP is the result of averaging each of the 20 trials in each block. The vertical dotted line represents the moment movement intent (MI) was detected by the trained SVM machine learning model. Discussion and Conclusions The design and validation of a custom EEG-based closed-loop BCI headset with onboard processing capabilities has been presented in this report. The design criteria required the consideration of a number of factors. Here, we have developed a minimal viable solution to this design task that is low-cost, portable, wireless, and easy to use and has high interoperability. To ensure a comfortable user experience, the proposed solution has a form factor that provides a one- size-fits-all approach and includes a user-friendly graphical interface for use at home. Additionally, the system has real-time adaptive signal de-noising and decoding capabilities built into the onboard processing board, making the system fully contained within the headset, a feature not currently found in off-the-shelf commercially available systems. All components of the system have been extensively bench tested and also validated with healthy adults, including an individual with chronic stroke.
Docket No.2957092-000002-WO2 Filed December 23, 2024 In the development of the proposed system, the importance of understanding the cascading nature of single design decisions cannot be overstated. Early design decisions can significantly impact the available options for hardware and software functionality and overall system operation. For the current system, the most influential design choice was in the selection of LabVIEW as the back-end coding language. While LabVIEW has a large number of well-tested libraries available, many of these libraries require a processing board developed by National Instruments. Due to the cost of those boards, the selection of the processing board was limited by whether the board was capable of using an open-source user-built LabVIEW library, which is not as well-tested as the libraries developed by National Instruments. Many of the challenges faced in the development of the proposed system were due to incompatibilities between LabVIEW and the low-cost processing board. Careful selection of the high-level system components (such as the backend language, port selections, wireless protocol, etc.) are critical for maximizing the performance and flexibility. In this regard, and to show the flexibility of our proposed system, we have recently programmed the board in C++ and achieved an open-loop sampling rate of 250 Hz. In conclusion, the proposed system should provide an open test bed for developing low- cost and portable yet effective custom EEG-based closed-loop BCI systems with wireless capabilities, which will help expand the potential user base and application domains and increase the feasibility for academic research and workforce development. Example 2 Brain-Computer Interface (BCI) and Internet of Things (IoT) systems are amalgamated to create BCIoT, under an embodiment. Most of the early applications have focused on the healthcare sector, and more recently, in education, virtual reality, smart homes, and smart vehicles, amongst others. While there are many transversal developing stages that can be satisfied by a single system, no common enabling technology or standards exist. These challenges are address in the proposed platform, Brain-eNet. This technology was developed considering the constraints-space defined by BCIoT real-time mobile applications. This is expected to enable the development of BCIoT systems by providing modular hardware and software resources. Two instances of this platform implementation are provided, a motor intent detection for rehabilitation and an emotion recognition system.
Docket No.2957092-000002-WO2 Filed December 23, 2024 INTRODUCTION Since the term Internet of Things (IoT) was first used by Ashton in 2009 [1] defining it as the result of “adding radio-frequency identification and other sensors to everyday objects”, this field has been vastly growing and evolving to give shape to a more holistic definition given by Ng and Wakenshaw [2] “as a network of entities that are connected through any form of sensor, enabling these entities, which we term as Internet-connected constituents, to be located, identified, and even operated upon”. In a similar fashion, Brain-Computer Interfaces (BCI) have been evolving since their inception in 1973 through the work of Jacques Vidal [3]. The BCI term can be defined as an additional communication channel of the brain with the world using non-normal output “pathways of peripheral nerves and muscles” [4], [5]. From current BCI technologies, electroencephalography-based BCI (EEG BCI) is the more affordable and simple to implement outside the lab in most environments [5]. Thence, from this point forward, when we refer to BCI it is assumed that we are referring to EEG-BCI. Early proof-of-concept BCIoT applications have been developed in health care [6], smart homes [7], [8], virtual reality [9], [8], and smart vehicles [10], among others. In these applications, researchers usually collect data using off-the-shelf EEG headsets, which are transmitted to a computing device where it is processed to give commands to the specific end-effector (e.g., a computer, physical or virtual object(s), and even an avatar) [11]. This approach has some difficulties associated with the following: • Cost: This pertains to the EEG headsets, processing units, and cloud computing services required in BCIoT systems. • Reliability: The dependence on remote processing units increases latency and increases privacy issues risk. This indicates the need for the development of Edge computing to ensure robustness and reliability [12]. • Usability: To use these systems, a certain level of technical proficiency is typically required, resulting in a barrier for users who are not technologically inclined [13]. • Computational complexity: These systems often involve a substantial number of channels, which results in increased computational demands and complexity [14]. Additionally, an enabling platform necessitates the development of processing pipelines that exhibit
Docket No.2957092-000002-WO2 Filed December 23, 2024 computational efficiency. This will be more critical if the application considers wearable BCIs and mobility, as it is necessary their implementation in battery powered-embedded systems [15], [16]. • No real-time denoising: Most of the denoising algorithms employed in BCI systems are for offline use, thereby imposing limitations on the practical implementation of BCIoT systems in real-time [17]. • Context Augmentation: The application context can be leveraged to relax the constraints- space. Problems that are considered intractable can be solved by making the correct assumptions of the context [18], [19], [20]. Because of these challenges, the exponential growth in BCIoT applications has been thwarted. Some companies have tried to counteract the high cost (See Table I) of the BCI component. Nevertheless, none of these systems allow for any preprocessing or processing onboard, needing additional computing resources in-situ. TABLE I: Low cost solutions Product Number of Channels Price Muse 2 [21] IMU and 4 EEG $249.99

In this article, we proposed Brain-eNet, a BCIoT platform that addresses the above challenges and it is expected to become an enabling technology for BCIoT applications in medical and non-medical sectors. The article is organized as follows: Section 2 discusses the methodology. Section 3 presents two applications of the proposed system, and section 4 provides a discussion and concluding remarks. METHODS The product specifications considered in the design of an IoT-enabled BCI system included onboard de-noising capabilities to handle artifacts that contaminate the EEG, machine learning model calibration for neural classification, impedance measurement to assess signal quality, WiFi/Bluetooth connectivity for IoT, usability and flexibility for electrode locations to fit a spectrum of applications in the medical and non-medical sectors, including neural engineering research applications. Additional criteria have been summarized in [25], [26] and [27].
Docket No.2957092-000002-WO2 Filed December 23, 2024 Hardware Module The system is composed of a proprietary chip that is interfaced with an embedded platform, the BeagleBone Black - Wireless (BBB-W) [28] chosen due to its low-cost, compatibility, and wireless capabilities (Bluetooth and WiFi). TABLE II: Amplifier Specifications Amplifier Specifications Number of Channels 8

The amplifier of Brain-eNet is the ADS1299 chip from Texas Instruments, Inc. [29], specifications shown in Table II, which follows the technical specifications defined in [30] and standards considerations highlighted in [11]. In addition to sensing EEG and electrooculography (EOG) signals, the system also measures head motion data using an Inertial Measurement Unit (IMU, ICM-20948) chosen because of its low power consumption, low error, and it is equipped with a 3-axes gyroscope, accelerometer, and magnetometer. Specifications for the IMU are shown in Table III. TABLE III: Inertial Movement Unit Specifications Inertial Movement Unit Specifications s

Docket No.2957092-000002-WO2 Filed December 23, 2024 Firmware Module The firmware was developed using modularity as a core design principle so that the firmware toolkit can be easily used, adapted, and suitable for different BCIoT applications. The current programming language used for the module is C++ for communication with both the amplifier and IMU. The current communication protocol is Serial Peripheral Interface (SPI), a synchronous serial communication commonly used for short-distance communication. The current EEG, EOG, and IMU features include amplifier setup of channel amplification, measurement of impedance values from the electrode system, raw data collection, saving data into memory, or streaming it to the BBB-W for processing. The current sampling frequency of the system is 500 Hz in open-loop. However, the sampling frequency will be limited by the computing resources of the embedded platform and the specific application, for example for the second implementation (see Section III-D.2), where a camera and video processing are needed, the sampling frequency is on average 190 Hz. Table IV depicts the overall BCIoT specifications of the Brain-eNet, with embedded Bluetooth and WiFi communication modules. It can be programmed in multiple programming languages, it allows for onboard real-time de-noising of the signals and neural decoding. The current implementation is limited to applications with up to eight channels (any combination of dry EEG and EOG electrodes), excluding the reference electrodes. RESULTS Signal Acquisition The signal acquisition process starts with amplifier setup, gathering data at a sampling rate of 500 Hz (open-loop), converting the received hexadecimal values into voltage values, and subsequently organizing the trial data into files suitable for subsequent analysis or streaming to the BBB-W for processing. Figure 37 illustrates the synchronized collection of 5-EEG, 3-EOG, 3-axis gyroscope, and 3-axis accelerometer data. The five EEG channels are positioned over the sensorimotor cortex for movement intent detection. The signals shown have been passed through a fourth-order band-pass filter between 0.5 Hz and 20 Hz and plotted prior to the de-noising module. Figures 36A-36D show raster plot of synchronized EEG, EOG and IMU data from the proposed BCIoT system during eyes closed, head movement with open eyes, and during walking, under an embodiment. Figure 36A shows data related to eye movement, under an embodiment. A
Docket No.2957092-000002-WO2 Filed December 23, 2024 participant was instructed to close his eyes during a session. Figure 36B shows data related to blinking patterns, under an embodiment. A participant was instructed to blink his eyes 2 times, and the blinking artifacts can be observed around 2 sec and 9 sec; identified with green dashed lines. Figure 36C shows data related to head movement data, under an embodiment. The participant was instructed to move his head to the right (shown by a red dashed line), backward (purple dashed line), forward (blue dashed line), and to the left (yellow dashed line). Figure 36D shows data related to walking back and forth, under an embodiment. The participant was instructed to walk in one direction back and forth. The plot’s orange dashed line shows places where the participant changed direction (forward and backward). Impedance Measurement In dry-electrode EEG systems, monitoring good signal quality (high signal-to-noise ratio (SNR)) is critical as it is necessary to have good contact between the electrodes and the scalp or skin. [31]. This requires measuring and displaying impedance values so potential users can adjust electrodes that show high impedance accordingly. Onboard Denoising As discussed earlier, EEG suffers from low spatial resolution, low SNR, and artifacts such as eye blinks and eye movements, shifts in electrode potentials, and motion artifacts. Because of EEG’s signal properties, a BCIoT system should be able to process EEG and denoise the brain signals effectively. However, common signal-denoising methods, such as independent component analysis (ICA) are not generally applicable to mobile or real-time applications. Implemented capabilities of the Brain-eNet system include a high pass filter, H-Infinity Adaptive Noise Cancellation used for real-time eye artifact removal [17], and a low pass filter. Additionally, real- time adaptive motion artifact removal [32] using IMU data is under development. Deployment of Brain-eNet In this section, two implementations of BCIoT demonstrate the adaptability and flexibility of the proposed hardware across various applications with minimal modifications. Drawing from the initial experience, the software modules were subsequently reconfigured to align with the existing principles of modularity already implemented in the hardware. The second implementation shows some partial results. TABLE IV: Current Engineering Specifications of Brain-eNet Brain-Computer Interface Specifications

Docket No.2957092-000002-WO2 Filed December 23, 2024 Processor Speed 1 GHz Processor Memory 512 MB t. ,

Figure 37 shows a headset used in a NeuroEXO System, under an embodiment. In this instantiation, we have a BCIoT platform for rehabilitation. The system is dedicated to motor imagery in a BCI application with five (EEG) electrodes across the sensorimotor cortex, three EOG electrodes, and two reference electrodes. EOG electrodes are found in the foremost arms and the front band. The deployed hardware and software can be found in the posterior area of the headset inside the white translucent box. [33] NeuroEXO: An IoT-enabled BCI system for upper-limb motor rehabilitation: The first application where the hardware was deployed was the NeuroEXO system, a “Brain-controlled Upper-Limb Robot-Assisted Rehabilitation Device for Stroke Survivors” [33]. The application focuses on using an EEG-controlled robotic device, rebless (H Robotics) [34], for neural rehabilitation of the sensorimotor cortex. The clinical application required the use of five (EEG)
Docket No.2957092-000002-WO2 Filed December 23, 2024 comb electrodes located in FC3, FC1, FCz, FC2, and FC4 (according to the international 10–20 system), based on findings from a prior BCI clinical trial for upper-limb rehabilitation after stroke [35], [36]. Additionally, the system used three electrooculography (EOG) electrodes located on the user’s face at the right and left temple and above the left eye, to create a reference to remove eye artifacts from EEG signals using Adaptive Noise Cancellation algorithm based on H-infinity [17] [37]. Regarding signal processing, the system incorporates onboard capabilities for denoising and decoding to identify motor intent effectively. Onboard denoising encompasses the removal of EOG artifacts and bandpass filtering. To detect motor intent, the system utilizes a Support Vector Machine (SVM) model, which undergoes training and testing directly onboard. Furthermore, the system includes other requirements, such as the control of the robotic device and a web application. This web application, developed using Labview and LINX toolkit, facilitates the display of impedance, EEG, and EOG signals. Additionally, it provides a protocol for subjects to follow. The web application is hosted on the same BBB-W used for signal processing onboard, and the visualizations generated can be accessed from a tablet (Amazon Fire 8) or an iPhone (7 and onward). Notably, all the necessary processing for this application is executed locally onboard, eliminating the reliance on external computing resources. Figure 37 illustrates the headset developed. This system is currently undergoing clinical trials at the clinic and at home based on [35], [36]. In the current trials, each participant has one week of training at the clinic and 6 weeks at home training. Up to this point, 5 stroke survivors have been enrolled in this study. The system is currently being validated in a longitudinal study with healthy participants for potential use in non- medical applications. These studies’ results and details of implementation are beyond this paper’s scope and are partially reported in [33], [38]. Remote Health Monitoring: The system was adapted for an emotion recognition application requiring four EEG channels, FT7, T7, FT8, and T8, located over the temporal areas as suggested by [39], one EOG channel, and synchronized video recording for context awareness capabilities. Additional requirements included WiFi communication with an iOS mobile application developed in Swift, a Firebase database to save mobile app information and video context awareness data, and machine learning.
Docket No.2957092-000002-WO2 Filed December 23, 2024 The system required minimal changes to be implemented and integrated into the emotion recognition application, suggesting that the current hardware and software implementation can be easily adapted to other systems for research in IoT-BCI applications. Figures 38 and 39 show the version of the system for emotion recognition where four EEG electrodes are positioned on the inner lateral areas of the headset, the reference electrodes are located on the posterior arms of the headset, and the deployed system located in the box located in the back of the headset. Figures 38 and 39 show a Remote Health Monitoring headset, under an embodiment. Figure 40 shows a perspective view of the Remote Health Monitoring headset with camera, under an embodiment. Figures 41-46 shows standard orthographic views of the Remote Health Monitoring headset with camera, under an embodiment. This version of the system is dedicated to emotion recognition with 4 EEG electrodes 4602, 4604, 4608, 4610 in temporal areas, 1 EOG electrode 4612, 2 reference electrodes 4620, 4622, and a camera 4630 for context-awareness (See Figure 46). The deployed hardware and software can be found in the posterior area of the headset inside the gray box (similar to 112 in Figure 1). Under an embodiment, electrodes are 4602, 4604, 4608, 4610 comprise rotating electrodes (similar to 118 in Figure 2). Figure 47 shows a system for emotion recognition, under an embodiment. Figure 48 shows a system for emotion recognition, under an embodiment. DISCUSSION AND CONCLUSIONS The development of a customized BCI system capable of measuring EEG signals with real- time onboard processing capabilities presents a complex design challenge that necessitates careful consideration of various factors, including portability, usability, interoperability, and reliability. The proposed platform, Brain-eNet, aims to serve as an open test bed for creating cost-effective and portable yet highly efficient and reliable custom IoT-BCI systems. Brain-eNet has been successfully implemented in two real-time fully onboard processing applications, namely motor intent and emotion recognition, and it is anticipated to be applicable in other IoT-BCI implementations. The combination of all these discussed features renders Brain-eNet a self- contained system, which is currently absent from the existing commercial landscape.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Future work is needed to address various areas common to IoT systems, including energy harvesting techniques [40], cloud computing integration [41], Deep Learning algorithms [42], and cybersecurity measures [43]. Cybersecurity is of particular concern, wherein attacks on IoT systems have been observed, targeting specific aspects such as device vulnerabilities, location- based exploits, access level breaches, information damage potential, host reliability, protocol- related and layer-based vulnerabilities, among others [43]. Current implementations do not consider the aspect as it is out of scope. However, the authors are aware of their needs and expect to implement some of the available approaches in the near future. EXAMPLE 3 Studies have characterized emotions as a multivariable and subjective experience
1. This has led to a growing interest among neuroscientists in gaining a deeper understanding of how emotions are triggered and influenced by various stimuli. It is discussed that emotions have a significant impact on social interactions as they constitute a crucial component of communication, which is much more than just verbal or lexical symbols
2. Studies have explored the influence of emotions in organizational settings
3, as a result research suggests that emotions play an important role in building relationships
4. Discerning and understanding these emotions is part of emotional intelligence. This skill involves not only the ability to recognize one's own emotional states, but also to discern the emotions of others in order to accurately differentiate among them and utilize this information to guide actions
5. The development of this skill leads to collaboration, productivity, and stress management improvements
9,10. Emotion recognition can be used to assess emotional intelligence. This field covers a wide array of techniques and approaches including text-based emotion detection, sound-based recognition using voice analysis and speech-related cues, facial expressions from pictures and videos, body posture and gait analysis, heart rate, blood pressure, galvanic skin response (GSR), respiration rate (RR), electromyography (EMG), electrooculography (EOG) and brain imaging techniques. This last category includes functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and electroencephalography (EEG). These methods can be employed individually or in combination to create a multimodal approach for comprehensive emotion detection and analysis
6,7.
Docket No.2957092-000002-WO2 Filed December 23, 2024 In particular EEG is widely used due to its potential to offer a straightforward, cost- effective, portable, fast, and user-friendly approach for emotion detection
8,9. Its use can lead to an objectively recognition of human emotional states with great time resolution, enabling researchers to investigate phase variations in response to emotional stimuli
9. In the literature the overall protocol for emotion recognition studies consists of: Expose the user to a stimulus and record the EEG signals. These signals are cleaned from artifacts, the relevant features are extracted and used to train a machine learning (or AI) classifier
11 to recognize emotion
7. This is a complex task primarily because of the multivariate nature of the problem and the influence of the context, making it challenging to interpret accurately
10. To improve the accuracy of emotion detection, a promising strategy involves integrating multiple modalities, including self-report questionnaires, EEG signals, facial expressions
12, and contextual information. The latter can be achieved by cameras for situation awareness and event tagging. Our new invention addresses these gaps. Description of Emotion Recognition Headset There are three general parts of this system: the EEG Headset-Camera, the Smartphone/iPhone-SmartWatch and the Cloud Service. EEG Headset-Camera In one instantiation, electroencephalography (EEG) data is acquired through four electrodes positioned at FT7, T7, FT8, and T8 – sensor locations known to be important for emotion recognition. This data is then amplified and subsequently routed to a processing unit. Ensuring synchronized data collection with the video recording is crucial to accurately correlate EEG data with corresponding video frames. To achieve this, the video recording system is integrated into the EEG headset. Moreover, the same processing unit serves both EEG data collection and video recording. Our new device is unique in that the biometrics (EEG, EOG) are integrated with motion sensing and videorecordings, which provide contextual integration for situation awareness and event tagging. To maintain a sampling frequency exceeding 100Hz, essential for capturing Delta, Theta, Alpha, Beta, and a significant portion of Gamma waves, we implement multi-threading. This approach enables us to achieve a sampling frequency of over 180Hz in an open-loop system. This comprehensive process encompasses data collection and logging, saving EEG data in a CSV file and video recordings in an MP4 file. Importantly, the file type can be customized based
Docket No.2957092-000002-WO2 Filed December 23, 2024 on the user's preferences. It is noteworthy that the application of multi-threading for simultaneous EEG and video recording within a single system represents a novel approach. The data generated from a series of sessions is harnessed for training a machine learning algorithm based in EEG signals and user feedback, with the aim of enabling emotion recognition. The video data can also serve for manual emotion recognition, assisting in the identification of emotional triggers within the user's context. Moreover, augmenting the accuracy of predictions can be achieved by training an additional machine learning model designed to recognize emotions based on contextual information, potentially enhancing the EEG-based predictions. Under an embodiment, the emotion recognition headset utilizes a Support Vector Machine (SVM) for training a machine learning algorithm based in EEG signals and user feedback. Alternative embodiments may implement deep learning models such as convolution neural networks. Under an embodiment, processing hardware and software are located in a posterior area, i.e. in the rear electronics box (similar to 112 in Figure 1). A 3-axis accelerometer and a 36-axis gyroscope may be located in the electronics box. These sensors may be used with H∞ filter to remove movement noise from the signal while EOG sensors may also be used with H∞ filter to remove eye blink and eye movement noise. SmartPhone/iPhone SmartWatch In addition to recording EEG and video data, the individual's pulse is monitored using a smartwatch (Apple Watch) connected to their SmartPhone (iPhone). The SmartPhone (iPhone) uploads this information, along with the data received from the EEG headset, to the cloud. Simultaneously, this data is stored on the phone for future access via an app designed for user interaction with the system. Within this app, users can manage privacy settings, specify data capture frequency, and integrate additional health sensors into the system, for example, weather, GPS, heart rate, etc., thus augmenting the situational awareness (context). In another instantiation, the SmartPhone (e.g., iPhone) would connect with Google Home (HomeKit) or similar, enabling it to activate music or other devices aimed at enhancing the user's emotional state. Cloud Storage and Processing
Docket No.2957092-000002-WO2 Filed December 23, 2024 The data generated undergoes additional processing in the cloud using Deep Learning AI algorithms to derive both a user-specific model and a generalized model via Transfer Learning based on diverse user inputs. This generalized model serves as the starting point for each new user, continually adapting and customizing itself as fresh data becomes accessible. This generated data resides in the cloud, enabling remote access by a physician through an app that facilitates interaction with the user. An instantiation of the proposed system can be seen depicted in Figure 47. The proposed system aims to provide a holistic monitoring platform that delivers insights into various monitoring tasks, encompassing the user's cognitive, physical, and emotional states, stress and fatigue levels, and support for rehabilitation and neurological disease monitoring. The proposed system can be designed to integrate seamlessly with smartphones, much like Apple's iWatch. It will function as an extension of the iPhone, with dedicated apps available on the iOS platform that communicates with the headset via Bluetooth or Wi-Fi. This app will provide a user interface allowing them to control the headset, view data, and receive insights in real-time. This integration will also enable the use of Apple's ecosystem for health data aggregation and analytics, aligning with the CareKit and ResearchKit frameworks, as well as other apps like StravaTM for activity monitoring. The next-gen headset is lightweight with a sleek, ergonomic design that is comfortable for extended wear in various application domains (see Figures 37-46). Emotion detection has gained significant importance in recent years, drawing the attention of researchers, healthcare professionals, and technologists alike. Emotions are an integral part of our daily lives, influencing decision-making, social interactions, and overall well- being. The ability to accurately recognize and understand emotions holds the potential to transform various fields, including mental health, human-computer interaction, and personalized healthcare. It is also crucial to understand how emotions manifest in individuals, as this can help identify those who may be struggling with persistent negative emotions. For instance, individuals who have difficulty managing emotions like anxiety, sadness, or anger may find themselves trapped in a cycle that affects their daily lives. Early detection and timely intervention can provide the necessary support to break these cycles, helping to prevent further emotional distress and improving overall well-being.
Docket No.2957092-000002-WO2 Filed December 23, 2024 With advancements in technology, emotion detection has become more achievable than ever before. The use of tools such as EEG (electroencephalography), facial recognition, and video cameras enables us to capture emotions in real time with greater accuracy and reliability. As this field continues to evolve, the ability to understand and respond to emotions will play a pivotal role in shaping the future of technology and its integration into our daily lives. Figures 49-57 shows components of the headset shown in Figure 46, under an embodiment. (dimensions in millimeter). Figure 49 shows a perspective view of an exploded headset side panel, under an embodiment. This view shows elements 4602 and 4604 while opposing duplicate panel would feature 4608 and 4610. Figure 50 shows an inside side panel view of an exploded headset side panel, under an embodiment. Figure 51 shows a front view of an exploded headset side panel, under an embodiment. Figure 52 shows a side view of a reference electrode arm, under an embodiment. Figure 53 shows a front view of a reference electrode arm, under an embodiment. Figure 54 shows a perspective view of a reference electrode arm, under an embodiment. Figure 55 shows a bottom view the front facing portion of the headset, under an embodiment. This view shows an EOG electrode. This view also demonstrates that embodiments may also include an additional EOG electrode in an opposing position. Figure 56 shows a front view of view the front facing portion of the headset, under an embodiment. This view shows a camera. Figure 57 shows an exploded interior view of the front facing panel, under an embodiment. Under an embodiment, The proposed emotion recognition system have five electrodes located in the positions FP2 (EOG), FT7, FT8, T7, and T8 according to the 10-20 system, these electrode locations are carefully selected for their proximity to key brain regions associated with emotional processing, such as the prefrontal cortex, which plays a role in emotion regulation and valence differentiation, and the temporal lobes, which are involved in arousal and the recognition of emotional and social cues. Data collected from these electrodes undergo processing to extract meaningful features from the EEG signals; these key features include Power Spectral Density (PSD) ratios, such as Theta/Alpha, Beta/Alpha, and Gamma/Alpha. These ratios are essential for
Docket No.2957092-000002-WO2 Filed December 23, 2024 normalizing individual differences and improving the accuracy of emotion differentiation. Additionally, Differential Asymmetry (DASM) is calculated to measure differences in PSD between homologous left and right hemisphere electrodes, providing insights into hemispheric lateralization of emotions. The Frontal Asymmetry Index (FAI) will also be employed to assess valence-related asymmetry in the frontal regions, which is critical for distinguishing between positive and negative emotions. Dimensionality reduction techniques, such as t-SNE, further enhances computational efficiency by isolating the most relevant features while retaining critical information. For emotion classification, the system implements a deep learning model selected from Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or Long Short-Term Memory (LSTM) networks, based on the data collected and performance evaluations. The system ensures superior accuracy and robustness compared to traditional machine learning algorithms like Support Vector Machines or Random Forests, making it highly effective for emotion classification tasks. The camera in the Emotions BCI system plays a crucial role in providing environmental context and analyzing external stimuli that may influence the user's emotional state. One of its key functions is environmental context analysis, where the camera captures visual elements in the user's surroundings, such as colors, lighting, or landscapes. Data collected from the camera are used to evaluate how external stimuli correlate with changes in emotional states detected via EEG signals. Additionally, the camera identifies visual content of emotional stimuli presented to the user, such as colors and shapes. By doing so, the system can refine its analysis by linking specific visual features, like high brightness or tones, to emotional states detected by the EEG. Once the model is trained, the system can use the camera to suggest activities or environments that align with the user’s emotional needs. For example, it may recommend calming environments with natural tones or energizing settings with vivid colors. The camera further enhances this process by verifying the user's exposure to the recommended stimuli, ensuring alignment with the suggested interventions and improving the overall effectiveness of the system. The system stands out for its unique ability to combine EEG data with contextual visual information while providing feedback to users through an app, creating a powerful emotion recognition platform. This multimodal integration allows it to capture how external factors, like colors, shapes, or lighting conditions, influence emotional states, offering deeper insights into the
Docket No.2957092-000002-WO2 Filed December 23, 2024 connection between a person’s surroundings and their internal emotional processes. Its portable and minimal setup makes it user-friendly, requiring only five strategically placed EEG sensors (FP2, FT7, FT8, T7, and T8) and a camera. The system also provides interactive feedback, enabling real-time emotion decoding with applications across frequency domains. For instance, in mental health it offers personalized emotional insights to promote well-being, and in human-computer interaction (HCI), it supports adaptive systems that dynamically respond to user emotions. In general, the system detects patterns, such as brain activity asymmetries and PSD ratios to predict emotions and deliver actionable recommendations at the time. Another differentiating feature is that the system can store data, allowing a specialist to review past instances of how the user's EEG data and exposures have changed over time. These capabilities can be part of non-invasive interventions, such as exposure to calming visuals or natural settings, to help improve mental health and overall emotional well-being. EXAMPLE 4 Under an embodiment, a MindSpring device provides a new version of the headset described above in the form of wearable earpiece. The MindSpring project creates a wearable health device designed to assist teenagers in managing stress through real-time brain-state monitoring and personalized music recommendations. The device integrates in-ear EEG sensors, a Bluetooth module, and a mobile application, providing a discreet and accessible solution for non-pharmacological stress management. By transmitting EEG data wirelessly to a mobile app for further analysis, the device enables neuroadaptive music therapy tailored to the user’s stress levels. The MindSpring device demonstrates the potential for innovative, technology-driven approaches to address mental health challenges in teenagers. The MindSpring device comprises a wearable health device that helps adolescents manage stress through real-time brain-state monitoring and personalized music-based interventions. This need arises from the growing prevalence of stress and mental health challenges among teenagers [1], who often lack accessible, non-pharmacological solutions for managing their well-being. The goal is to provide an effective, easy-to-use solution that empowers teenagers to manage stress in real-time, improving their emotional and mental health.
Docket No.2957092-000002-WO2 Filed December 23, 2024 The device wirelessly transmits EEG data to a mobile device and displays it within a mobile application. This targeted functionality shifts the emphasis from the broader wearable device concept to delivering seamless data transmission and real-time visualization. The design prioritizes being lightweight, wireless, and capable of operating for over four hours on a single charge. Figure 58 provides an overview of the MindSpring device system, under an embodiment. Product specifications: 1. Battery Life: The device operates for at least four hours on a single charge, ensuring sufficient usability throughout daily activities. 2. Data Transmission: A single Bluetooth Low Energy (BLE) connection synchronizes the units and transmit data to a smartphone in real-time. 3. Power Efficiency: A low-power mode is integrated to minimize battery consumption when the device is not in active use. 4. Data Collection: The product collects brain wave data through EEG sensors, supplemented by an inertial measurement unit (IMU) for enhanced accuracy in motion scenarios. 5. Mobile Application: A user-friendly mobile app displays the collected data in real-time, providing a seamless interface for the user. 6. Compact and Lightweight Design: Each wearable unit will weigh approximately four grams, ensuring comfort and ease of use. Engineering specifications: Earbud Specifications

Docket No.2957092-000002-WO2 Filed December 23, 2024 Bits 24 Minimum Sampling Rate 250 [Hz]
Docket No.2957092-000002-WO2 Filed December 23, 2024 Low Pass Filtering, High Pass Filtering, Denoising Capabilities Adaptive Motion Noise Cancellation

standards to ensure user safety. Additionally, the device maintains a compact and lightweight design, with each unit weighing no more than 4 grams, to ensure comfort and usability. The design prioritizes recyclable materials to reduce environmental impact. Under an embodiment, the attachment integrates seamlessly with Sony earbuds (Figure 59 and 60). Figure 61 shows a workflow for system processing and use of data, under an embodiment. Figure 61 illustrates the following: Digitize EEG Data: EEG signals, captured as analog outputs, are digitized. The digitized data reflects the brain signals while mitigating additional noise. Pairs to Phone: The Bluetooth module is able to pair to a mobile device. Software Verification: All software for the application is tested rigorously with artificial data and edge cases to ensure continued reliability with real data. Transmit via Bluetooth: For the wireless transmission of data, the device and system use Bluetooth.. Display Data via App: The mobile app displays real-time EEG data in a clear and user-friendly format. It is intuitive, with graphs and indicators that make the data easy to interpret. Manage Sensor Network: The microcontroller manages substantial data streams from both the EEG and IMU sensors. Ensuring that the microcontroller can effectively handle and process this data in real time, while managing the needs of the components themselves, is crucial for the success of our system. Wirelessly Transmit EEG Data:
Docket No.2957092-000002-WO2 Filed December 23, 2024 The primary goal is to capture EEG signals from the ear and transmit them wirelessly to a mobile device. Ensuring smooth data flow without drops or interference is critical. The system will be tested under various conditions to confirm its reliability and performance. The system provides the following capabilities and functionalities: ADC ensures a sampling rate of > 250 [Hz]. Confirm capacity to connect Bluetooth module to a mobile device. The module provides a data transfer rate of > 250 [Hz]. MCU provides a sampling rate of > 250 [Hz] from the ADC and IMU. References Example 1 1. Kübler, A. The history of BCI: From a vision for the future to real support for personhood in people with locked-in syndrome. Neuroethics 2020, 13, 163–180. [CrossRef] 2. Nijholt, A.; Contreras-Vidal, J.; Jeunet, C.; Väljamäe, A. Brain-Computer Interfaces for Non-clinical (Home, Sports, Art, Entertainment, Education, Well-Being) Applications. Front. Comput. Sci.2022, 4, 860619. [CrossRef] 3. Urigüen, J.; Garcia-Zapirain, B. EEG artifact removal—State-of-the-art and guidelines. J. Neural Eng.2015, 12, 031001. [CrossRef] [PubMed] 4. Goldenholz, D.; Ahlfors, S.; Hämäläinen, M.; Sharon, D.; Ishitobi, M.; Vaina, L.; Stufflebeam, S. Mapping the signal-to-noise-ratios of cortical sources in magnetoencephalography and electroencephalography. Hum. Brain Mapp. 2009, 30, 1077–1086. [CrossRef] 5. Kilicarslan, A.; Grossman, R.G.; Contreras-Vidal, J.L. A robust adaptive de-noising framework for real-time artifact removal in scalp eeg measurements. J. Neural Eng.2016, 13, 026013. [CrossRef] 6. Kilicarslan, A.; Vidal, J. Characterization and real-time removal of motion artifacts from EEG signals. J. Neural Eng.2019, 16, 056027. [CrossRef] [PubMed] 7. Craik, A.; He, Y.; Contreras-Vidal, J.L. Deep learning for electroencephalogram (eeg) classification tasks: A review. J. Neural Eng.2019, 16, 031001. [CrossRef]
Docket No.2957092-000002-WO2 Filed December 23, 2024 Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.; Faubert, J. Deep learning- based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 051001. [CrossRef]

Abiri, R.; Borhani, S.; Sellers, E.; Jiang, Y.; Zhao, X. A review of EEG- based brain—Computer interface paradigms. J. Neural Eng.2019, 16, 011001. [CrossRef] Steele, A.G.; Parekh, S.; Azgomi, H.F.; Ahmadi, M.B.; Craik, A.; Pati, S.; Francis, J.T.; Contreras-Vidal, J.L.; Faghih, R.T. A mixed filtering approach for real-time seizure state tracking using multi-channel electroencephalography data. IEEE Trans. Neural Syst. Rehabil. Eng.2021, 29, 2037–2045. [CrossRef] [PubMed] Ahmadi, M.B.; Craik, A.; Azgomi, H.F.; Francis, J.T.; Contreras-Vidal, J.L.; Faghih, R.T. Real-time seizure state tracking using two channels: A mixed-filter approach. In Proceedings of the 201953rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp.2033–2039. Aboalayon, K.A.I.; Faezipour, M.; Almuhammadi,W.S.; Moslehpour, S. Sleep stage classification using eeg signal analysis: A comprehensive survey and new investigation. Entropy 2016, 18, 272. [CrossRef] Zhou, Y.; Huang, S.; Xu, Z.; Wang, P.; Wu, X.; Zhang, D. Cognitive workload recognition using eeg signals and machine learning: A review. IEEE Trans. Cogn. Dev. Syst.2021, 14, 799–818. [CrossRef] Craik, A.; Kilicarslan, A.; Contreras-Vidal, J.L. Classification and transfer learning of eeg during a kinesthetic motor imagery task using deep convolutional neural networks. In Proceedings of the 201941st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 3046– 3049. Ferrero, L.; Quiles, V.; Ortiz, M.; Iáñez, E.; Navarro-Arcas, A.; Flores-Yepes, J.; Contreras-Vidal, J.; Azorín, J. Comparison of different brain–computer interfaces to assess motor imagery using a lower-limb exoskeleton. In Converging Clinical and Engineering Research on Neurorehabilitation IV, Proceedings of the 5th International Conference on Neurorehabilitation (ICNR2020), Online, 13–16 October 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp.53–58.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Bundy, D.T.; Souders, L.; Baranyai, K.; Leonard, L.; Schalk, G.; Coker, R.; Moran, D.W.; Huskey, T.; Leuthardt, E.C. Contralesional brain—Computer interface control of a powered exoskeleton for motor recovery in chronic stroke survivors. Stroke 2017, 48, 1908–1915. [CrossRef] Bhagat, N.A.; Yozbatiran, N.; Sullivan, J.L.; Paranjape, R.; Losey, C.; Hernandez, Z.; Keser, Z.; Grossman, R.; Francisco, G.; O’Malley, M.K.; et al. A clinical trial to study changes in neural activity and motor recovery following brain-machine interface enabled robot-assisted stroke rehabilitation. medRxiv 2020. [CrossRef] Nijholt, A. Multi-modal and multi-brain-computer interfaces: A review. In Proceedings of the 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2–4 December 2015; pp.1–5. Hekmatmanesh, A.; Nardelli, P.H.; Handroos, H. Review of the state-of-the-art of brain- controlled vehicles. IEEE Access 2021, 9, 110173–110193. [CrossRef] Shukla, P.K.; Chaurasiya, R.K. A review on classification methods used in eeg-based home control systems. In Proceedings of the 20183rd International Conference and Workshops on Recent Advances and Innovations in Engineering (ICRAIE), Jaipur, India, 22–25 November 2018; pp.1–5. Zhang, R. Virtual reality games based on brain computer interface. In Proceedings of the 2020 International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI), Sanya, China, 4–6 December 2020; pp.227–230. Kerous, B.; Liarokapis, F. Brain-computer interfaces-a survey on interactive virtual environments. In Proceedings of the 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (vs.-Games), Barcelona, Spain, 7–9 September 2016; pp.1–4. Craik, A.; Kilicarslan, A.; Contreras-Vidal, J.L. A translational roadmap for a brain- machine-interface (bmi) system for rehabilitation. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp.3613–3618. Merchak, T.; Goldberg, I. Concept to Clinic: Commercializing Innovation (C3i) Program; The National Institute of Biomedical Imaging and Bioengineering (NIBIB): Bethesda, MD, USA, 2021.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Paek, A.; Brantley, J.; Ravindran, A.; Nathan, K.; He, Y.; Eguren, D.; Cruz-Garza, J.; Nakagome, S.; Wickramasuriya, D.; Chang, J.; et al. A roadmap towards standards for neurally controlled end effectors. IEEE Open J. Eng. Med. Biol.2021, 2, 84–90. [CrossRef] Bowsher, K.; Civillico, E.; Coburn, J.; Collinger, J.; Contreras-Vidal, J.; Denison, T.; Donoghue, J.; French, J.; Getzoff, N.; Hochberg, L.; et al. Others Brain–computer interface devices for patients with paralysis and amputation: A meeting report. J. Neural Eng.2016, 13, 023001. [CrossRef] Mooney, J. Strategies for supporting application portability. Computer 1990, 23, 59–70. [CrossRef] Brooke, J. SUS-A Quick and Dirty Usability Scale. Usability Evaluation in Industry; CRC Press: Boca Raton, FL, USA, 1996; ISBN 9780748404605. Available online: https://www.crcpress.com (accessed on 15 October 2022). Bevana, N.; Kirakowskib, J.; Maissela, J. What is usability. In Proceedings of the 4th International Conference on HCI, Stuttgart, Germany, 1–6 September 1991; pp.1–6. Stavrakos, S.; Ahmed-Kristensen, S. Assessment of anthropometric methods in headset design. In Proceedings of the DESIGN 2012, The 12th International Design Conference (DS 70), Dubrovnik, Croatia, 21–24 May 2012. Chabin, T.; Gabriel, D.; Haffen, E.; Moulin, T.; Pazart, L. Are the new mobile wireless EEG headsets reliable for the evaluation of musical pleasure? PLoS ONE 2020, 15, e0244820. [CrossRef] Jiang, X.; Bian, G.; Tian, Z. Removal of artifacts from EEG signals: A review. Sensors 2019, 19, 987. [CrossRef] He, Y.; Eguren, D.; Azorín, J.M.; Grossman, R.G.; Luu, T.P.; Contreras-Vidal, J.L. Brain– machine interfaces for controlling lower-limb powered robotic systems. J. Neural Eng. 2018, 15, 021004. [CrossRef] [PubMed] Open Source Tools for Neuroscience. Available online: https://openbci.com/ (accessed on 2 February 2023). Meditation Made Easy. 2023. Available online: https://choosemuse.com/ (accessed on 2 February 2023).
Docket No.2957092-000002-WO2 Filed December 23, 2024 Xing, X.; Wang, Y.; Pei, W.; Guo, X.; Liu, Z.; Wang, F.; Ming, G.; Zhao, H.; Gui, Q.; Chen, H. A high-speed SSVEP-based BCI using dry EEG electrodes. Sci. Rep. 2018, 8, 14708. [CrossRef] [PubMed] Arpaia, P.; Callegaro, L.; Cultrera, A.; Esposito, A.; Ortolano, M. Metrological characterization of consumer-grade equipment for wearable brain—Computer interfaces and extended reality. IEEE Trans. Instrum. Meas.2021, 71, 4002209. [CrossRef] Fok, S.; Schwartz, R.; Wronkiewicz, M.; Holmes, C.; Zhang, J.; Somers, T.; Bundy, D.; Leuthardt, E. An EEG-based brain computer interface for rehabilitation and restoration of hand control following stroke using ipsilateral cortical physiology. In Proceedings of the 2011 Annual International Conference of The IEEE Engineering In Medicine And Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp.6277–6280. EMOTIV Emotiv EPOC X: 14 Channel Mobile EEG Headset. EMOTIV.2023. Available online: https://www.emotiv.com/epoc-x/ (accessed on 2 February 2023). Schalk, G.; McFarl, T.; Birbaumer, N.;Wolpaw, J. BCI2000: A general-

purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 2004, 51, 1034–1043. [CrossRef] [PubMed] Niso, G.; Romero, E.; Moreau, J.T.; Araujo, A.; Krol, L.R.Wireless eeg: An survey of systems and studies. NeuroImage 2022, 269, 119774. [CrossRef] ISO 9241-210:2010; Ergonomics of Human-System Interaction—Part 210: Human- Centred Design for Interactive Systems. ISO: Geneva, Switzerland, 2010. Feng, T.; Kuhn, D.; Ball, K.; Kerick, S.; McDowell, K. Evaluation of a Prototype Low- Cost, Modular, Wireless Electroencephalography (Eeg) Headset Design for Widespread Application; Army Research Lab.: Adelphi, MD, USA, 2016. Hairston, W.D.; Whitaker, K.W.; Ries, A.J.; Vettel, J.M.; Bradford, J.C.; Kerick, S.E.; McDowell, K. Usability of four commercially-oriented eeg systems. J. Neural Eng.2014, 11, 046018. [CrossRef] Verwulgen, S.; Lacko, D.; Vleugels, J.; Vaes, K.; Danckaers, F.; Bruyne, G.D.; Huysmans, T. A new data structure and workflow for using 3d anthropometry in the design of wearable products. Int. J. Ind. Ergon.2018, 64, 108–117. [CrossRef] Li, G.;Wang, S.; Duan, Y.Y. Towards gel-free electrodes: A systematic study of electrode- skin impedance. Sens. Actuators Chem.2017, 241, 1244–1255. [CrossRef]
Docket No.2957092-000002-WO2 Filed December 23, 2024 Pheasant, S.; Haslegrave, C.M. Bodyspace: Anthropometry, Ergonomics and the Design of Work; CRC Press: Boca Raton, FL, USA, 2018. Lacko, D.; Vleugels, J.; Fransen, E.; Huysmans, T.; Bruyne, G.D.; Hulle, M.M.V.; Sijbers, J.; Verwulgen, S. Ergonomic design of an eeg headset using 3d anthropometry. Appl. Ergon.2017, 58, 128–136. [CrossRef] Ellena, T.; Subic, A.; Mustafa, H.; Pang, T.Y. The helmet fit index—An intelligent tool for fit assessment and design customisation. Appl. Ergon.2016, 55, 194–207. [CrossRef] Karlson, A.K.; Bederson, B.B.; Contreras-Vidal, J. Understanding single-handed mobile device interaction. Handb. Res. User Interface Des. Eval. Mob. Technol.2006, 1, 86–101. Yates, D.C.; Rodriguez-Villegas, E. A key power trade-off in wireless eeg headset design. In Proceedings of the 2007 3rd International IEEE/EMBS Conference on Neural Engineering, Kohala Coast, HI, USA, 2–5 May 2007; pp.453–456. Acharya, J.N.; Hani, A.J.; Thirumala, P.; Tsuchida, T.N. American clinical neurophysiology society guideline 3: A proposal for standard montages to be used in clinical eeg. Neurodiagn. J.2016, 56, 253–260. [CrossRef] [PubMed] Wang, D.;Miao, D.; Blohm, G.Multi-class motor imagery EEG decoding for brain- computer interfaces. Front. Neurosci.2012, 6, 151. [CrossRef] [PubMed] Chen, Y.-H.; de Beeck, M.O.; Vanderheyden, L.; Mihajlovic, V.; Grundlehner, B.; Hoof, C.V. Comb-shaped polymer-based dry electrodes for eeg/ecg measurements with high user comfort. In Proceedings of the 201335th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 551–554. Kawana, T.; Yoshida, Y.; Kudo, Y.; Iwatani, C.; Miki, N. Design and characterization of an eeg-hat for reliable eeg measurements. Micromachines 2020, 11, 635. [CrossRef] [PubMed] Gorman, N.; Louw, A.; Craik, A.; Gonzalez, J.; Feng, J.; Contreras-Vidal, J.L. Design principles for mobile brain-body imaging devices with optimized ergonomics. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Virtually, 25–29 July 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp.3–10. Gordon, C.C.; Blackwell, C.L.; Bradtmiller, B.; Parham, J.L.; Barrientos, P.; Paquette, S.P.; Corner, B.D.; Carson, J.M.; Venezia, J.C.; Rockwell, B.M.; et al. 2012
Docket No.2957092-000002-WO2 Filed December 23, 2024 Anthropometric Survey of US Army Personnel: Methods and Summary Statistics; Army Natick Soldier Research Development and Engineering Center: Natick, MA, USA, 2014. Meet Beagle™: Open Source Computing. Available online: https://beagleboard.org/ (accessed on 2 February 2023). Linx Labview Makerhub. Available online: https://www.labviewmakerhub.com/doku.php?id=learn:start (accessed on 2 February 2023). Nuwer, M.; Comi, G.; Emerson,

Ikeda, A.; Luccas, F.; Rappelsburger, P. IFCN standards for digital recording of clinical EEG. Electroencephalogr. Clin. Neurophysiol.1998, 106, 259–261. [CrossRef] [PubMed] Texas Instruments. ADS1299 Analog-to-Digital Converter for EEG and Biopotential Measurements; Texas Instruments: Dallas, TX, USA, 2017. TDK. ICM-20948 World’s Lowest Power 9-Axis MEMS MotionTracking™ Device; TDK InvenSense: San Jose, CA, USA, 2021. Louis. Polymer Lithium-ion Battery Product Specification; AA Portable Power Corp: Richmond, CA, USA, 2010. Octavo Systems LLC. OSD3358 Application Guide; Octavo Systems LLC: Austin, TX, USA, 2017. Jenny, B.; Kelso, N.V. Color design for the color vision impaired. Cartogr. Perspect.2007, 61–67. [CrossRef] Allan, J. Accessibility Requirements for People with Low Vision.2016. Available online: https://www.w3.org/TR/low-visionneeds/ (accessed on 2 February 2023). Zander, T.O.; Andreessen, L.M.; Berg, A.; Bleuel, M.; Pawlitzki, J.; Zawallich, L.; Krol, L.R.; Gramann, K. Evaluation of a dry eeg system for application of passive brain- computer interfaces in autonomous driving. Front. Hum. Neurosci. 2017, 11, 78 . [CrossRef] Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud.2009, 4, 114–123. Hallett, M. Movement-related cortical potentials. Electromyogr. Clin. Neurophysiol.1994, 34, 5–13.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Lu, M.; Arai, N.; Tsai, C.; Ziemann, U. Movement related cortical potentials of cued versus self-initiated movements: Double dissociated modulation by dorsal premotor cortex versus supplementary motor area rTMS. Hum. Brain Mapp.2012, 33, 824–839. [CrossRef] Niazi, I.; Jiang, N.; Tiberghien, O.; Nielsen, J.; Dremstrup, K.; Farina, D. Detection of movement intention from single-trial movement-related cortical potentials. J. Neural Eng. 2011, 8, 066009. [CrossRef] [PubMed] Siemionow, V.; Yue, G.; Ranganathan, V.; Liu, J.; Sahgal, V. Relationship between motor activity-related cortical potential and voluntary muscle activation. Exp. Brain Res. 2000, 133, 303–311. [CrossRef] [PubMed] Shibasaki, H.; Barrett, G.; Halliday, E.; Halliday, A. Components of the movement-related cortical potential and their scalp topography. Electroencephalogr. Clin. Neurophysiol. 1980, 49, 213–226. [CrossRef] [PubMed] Shakeel, A.; Navid, M.; Anwar, M.; Mazhar, S.; Jochumsen, M.; Niazi, I. A review of techniques for detection of movement intention using movement-related cortical potentials. Comput. Math. Methods Med.2015, 2015, 346217. [CrossRef] Duvinage, M.; Castermans, T.; Petieau, M.; Seetharaman, K.; Hoellinger, T.; Cheron, G.; Dutoit, T. A subjective assessment of a p300 bci system for lower-limb rehabilitation purposes. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp.3845–3849. Goncharova, I.; McFarl, ; D.; Vaughan, T.; Wolpaw, J. EMG contamination of EEG: Spectral and topographical characteristics. Clin. Neurophysiol. 2003, 114, 1580–1593. [CrossRef] Barry, R.; Clarke, A.; Johnstone, S.; Magee, C.; Rushby, J. EEG differences between eyes- closed and eyes-open resting conditions. Clin. Neurophysiol. 2007, 118, 2765–2773. [CrossRef] Kuhlman, W.N. Functional topography of the human mu rhythm. Electroencephalogr. Clin. Neurophysiol.1978, 44, 83–93. [CrossRef] Müller, V.; Anokhin, A.P. Neural synchrony during response production and inhibition. PLoS ONE 2012, 7, e38931. [CrossRef]
Docket No.2957092-000002-WO2 Filed December 23, 2024 80. Schoppenhorst, M.; Brauer, F.; Freund, G.; Kubicki, S. The significance of coherence estimates in determining central alpha and mu activities. Electroencephalogr. Clin. Neurophysiol.1980, 48, 25–33. [CrossRef] 81. Bhagat, N.A.; Venkatakrishnan, A.; Abibullaev, B.; Artz, E.J.; Yozbatiran, N.; Blank, A.A.; French, J.; Karmonik, C.; Grossman, R.G.; O’Malley, M.K.; et al. Design and optimization of an eeg-based brain machine interface (bmi) to an upper-limb exoskeleton for stroke survivors. Front. Neurosci.2016, 10, 122. [CrossRef] Example 2 [1] K. Ashton et al., “That ‘internet of things’ thing,” RFID journal, vol. 22, no. 7, pp. 97–114, 2009. [2] I. C. Ng and S. Y. Wakenshaw, “The internet-of-things: Review and research directions,” International Journal of Research in Marketing, vol.34, no.1, pp.3–21, 2017. [Online]. [3] J. J. Vidal, “Toward direct brain-computer communication,” Annual review of Biophysics and Bioengineering, vol.2, no.1, pp.157–180, 1973. [4] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, “An eeg-based brain-computer interface for cursor control,” Electroencephalography and clinical neurophysiology, vol.78, no.3, pp.252–259, 1991. [5] J. R. Wolpaw, N. Birbaumerc, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain– computer interfaces for communication and control,” Clinical Neurophysiology, vol.113, pp.767– 791, 2002. [6] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. d. R. Millan, “Towards independence: a bci telepresence robot for people with severe motor disabilities,” Proceedings of the IEEE, vol.103, no.6, pp.969–982, 2015. [7] D. R. Edla, K. Mangalorekar, G. Dhavalikar, and S. Dodia, “Classification of eeg data for human mental state analysis using random forest classifier,” Procedia computer science, vol.132, pp.1523–1532, 2018. [8] X. Zhang, L. Yao, S. Zhang, S. Kanhere, M. Sheng, and Y. Liu, “Internet of things meets brain– computer interface: A unified deep learning framework for enabling human-thing cognitive interactivity,” IEEE Internet of Things Journal, vol.6, no.2, pp.2084–2092, 2018. [9] C. G. Coogan and B. He, “Brain-computer interface control in a virtual reality environment and applications for the internet of things,” IEEE Access, vol.6, pp.10840–10849, 2018.
Docket No.2957092-000002-WO2 Filed December 23, 2024 [10] A. Balaji, U. Tripathi, V. Chamola, A. Benslimane, and M. Guizani, “Toward safer vehicular transit: implementing deep learning on single channel eeg systems for microsleep detection,” IEEE Transactions on Intelligent Transportation Systems, 2021. [11] A. Y. Paek, J. A. Brantley, A. Sujatha Ravindran, K. Nathan, Y. He, D. Eguren, J. G. Cruz- Garza, S. Nakagome, D. S. Wickramasuriya, J. Chang, M. Rashed-Al-Mahfuz, M. R. Amin, N. A. Bhagat, and J. L. Contreras-Vidal, “A roadmap towards standards for neurally controlled end effectors,” IEEE Open Journal of Engineering in Medicine and Biology, vol.2, pp.84–90, 2021. [12] P. Hu, S. Dhelim, H. Ning, and T. Qiu, “Survey on fog computing: architecture, key technologies, applications and open issues,” Journal of Network and Computer Applications, vol. 98, pp. 27–42, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1084804517302953 [13] W. D. Hairston, K. W. Whitaker, A. J. Ries, J. M. Vettel, J. C. Bradford, S. E. Kerick, and K. McDowell, “Usability of four commercially-oriented eeg systems,” Journal of neural engineering, vol.11, no.4, p.046018, 2014. [14] T. Alotaiby, F. E. A. El-Samie, S. A. Alshebeili, and I. Ahmad, “A review of channel selection algorithms for eeg signal processing,” EURASIP Journal on Advances in Signal Processing, vol. 2015, pp.1–21, 2015. [15] T. A. Henzinger and J. Sifakis, “The embedded systems design challenge,” in International Symposium on Formal Methods. Springer, 2006, pp.1–15. [16] J. J. G. Espana, J. A. J. J. Builes, and J. W. B. Bedoya, “Peniel method ˜ for the automation of the ultrasonic monitoring based on the acoustic impedance,” in 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2013, pp.1850–1856. [17] A. Kilicarslan, R. G. Grossman, and J. L. Contreras-Vidal, “A robust adaptive denoising framework for real-time artifact removal in scalp eeg measurements,” Journal of neural engineering, vol.13, no.2, p.026013, 2016. [18] E. J. Gonzalez, C. Luo, A. Shrivastava, K. Palem, Y. Moon, S. Noh, D. Park, and S. Hong, “Location detection for navigation using imus with a map through coarse-grained machine learning,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017. IEEE, 2017, pp.500–505.
Docket No.2957092-000002-WO2 Filed December 23, 2024 [19] C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky, “The stanford corenlp natural language processing toolkit,” in Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, 2014, pp.55–60. [20] M. Sonka, V. Hlavac, and R. Boyle, Image processing, analysis, and machine vision. Cengage Learning, 2014. [21] “Meditation made easy,” Jan 2023. [Online]. Available: https://choosemuse.com/ [22] “Emotiv insight 2.” [Online]. Available: https://www.emotiv.com/insight/ [23] “Open source tools for neuroscience.” [Online]. Available: https://openbci.com/ [24] “Neurosky mindwave 2.” [Online]. Available: https://neurosky.com/biosensors/eeg- sensor/biosensors/ [25] P. K. Shukla and R. K. Chaurasiya, “A review on classification methods used in eeg-based home control systems,” in 20183rd International Conference and Workshops on Recent Advances and Innovations in Engineering (ICRAIE). IEEE, 2018, pp.1–5. [26] T. Feng, D. Kuhn, K. Ball, S. Kerick, and K. McDowell, “Evaluation of a prototype low-cost, modular, wireless electroencephalography (eeg) headset design for widespread application,” Army Research Lab Aberdeen Proving Ground United Kingdom, Tech. Rep., 2016. [27] A. Craik, A. Kilicarslan, and J. L. Contreras-Vidal, “A translational roadmap for a brain- machine-interface (bmi) system for rehabilitation,” in 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2019, pp.3613–3618. [28] “Meet beagle™: Open source computing.” [Online]. Available: https://beagleboard.org/ [29] TI, ADS1299 Analog-to-Digital Converter for EEG and Biopotential Measurements, Texas Instruments, 12017, rev.2. [30] M. Nuwer, G. Comi, R. Emerson, A. Fuglsang-Frederiksen, J. Guerit, H. Hinrichs, A. Ikeda, F. Luccas, and P. Rappelsburger, “IFCN standards for digital recording of clinical EEG,” Electroencephalography and Clinical Neurophysiology, vol.106, no.3, pp.259–261, MAR 1998. [31] G. Li, S. Wang, and Y. Y. Duan, “Towards gel-free electrodes: A systematic study of electrode-skin impedance,” Sensors and Actuators B: Chemical, vol.241, pp.1244–1255, 2017. [32] A. Kilicarslan, , and J. L. Contreras-Vidal, “Characterization and real-time removal of motion artifacts from eeg signals,” Journal of neural engineering, vol.16, p.056027, 2019.
Docket No.2957092-000002-WO2 Filed December 23, 2024 [33] A. Craik, J. J. Gonzalez-Espa ´ na, A. Alamir, D. Edquilang, S. Wong, ˜ L. Sanchez Rodr ´ ´ıguez, J. Feng, G. E. Francisco, and J. L. Contreras-Vidal, “Design and validation of a low-cost mobile eeg-based brain–computer interface,” Sensors, vol.23, no.13, p.5930, 2023. [34] “Smart rehabilitation solution rebless.” [Online]. Available: https://hroboticsus.com/smart- rehab-devices/rebless [35] N. A. Bhagat, N. Yozbatiran, J. L. Sullivan, R. Paranjape, C. Losey, Z. Hernandez, Z. Keser, R. Grossman, G. Francisco, M. K. O’Malley et al., “A clinical trial to study changes in neural activity and motor recovery following brain-machine interface enabled robot-assisted stroke rehabilitation,” medRxiv, 2020. [36] N. A. Bhagat, A. Venkatakrishnan, B. Abibullaev, E. J. Artz, N. Yozbatiran, A. A. Blank, J. French, C. Karmonik, R. G. Grossman, M. K. O’Malley et al., “Design and optimization of an eeg- based brain machine interface (bmi) to an upper-limb exoskeleton for stroke survivors,” Frontiers in neuroscience, vol.10, p.122, 2016. [37] J. Gonzalez-Espana, A. Craik, C. Ramirez, A. Alamir, and J. L. Contreras-Vidal, “Optimization of electrode configuration for the removal of eye artifacts with adaptive noise cancellation,” in 2023 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2023. [38] J. Gonzalez-Espana, A. Craik, A. Alamir, J. Feng, and J. L. Contreras-Vidal, “Neuroexo: A low cost non invasive brain computer interface for upper-limb stroke neurorehabilitation at home,” in Proceedings of the 10th International Brain-Computer Interface Meeting 2023. Graz University of Technology Publishing House, 2023, p.213, iSBN: 978-3-85125-962-9. [39] W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands and channels for eeg-based emotion recognition with deep neural networks,” EEE Transactions on Autonomous Mental Development, vol.7, pp.162–175, 2015. [40] P. Kamalinejad, C. Mahapatra, Z. Sheng, S. Mirabbasi, V. C. Leung, and Y. L. Guan, “Wireless energy harvesting for the internet of things,” IEEE Communications Magazine, vol.53, no.6, pp.102–108, 2015. [41] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,” IEEE internet of things journal, vol.3, no.5, pp.637–646, 2016.
Docket No.2957092-000002-WO2 Filed December 23, 2024 [42] M. Mohammadi, A. Al-Fuqaha, S. Sorour, and M. Guizani, “Deep learning for iot big data and streaming analytics: A survey,” IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp.2923–2960, 2018. [43] Y. Lu and L. Da Xu, “Internet of things (iot) cybersecurity research: A review of current research topics,” IEEE Internet of Things Journal, vol.6, no.2, pp.2103–2115, 2018. Example 3 [1] Song, Zhenjie. "Facial expression emotion recognition model integrating philosophy and machine learning theory." Frontiers in Psychology 12 (2021): 759485. [2] Čmejrková, Světla. "Emotions in language and communication." Emotion in Dialogic Interaction. Amsterdam: John Benjamins (2004): 33-53. [3] Rafaeli, Anat, and Robert I. Sutton. "Emotional contrast strategies as means of social influence: Lessons from criminal interrogators and bill collectors." Academy of management journal 34.4 (1991): 749-775. [4] Morris, Michael W., and Dacher Keltner. "How emotions work: The social functions of emotional expression in negotiations." Research in organizational behavior 22 (2000): 1-50. [5] Salovey, Peter, and John D. Mayer. "Emotional intelligence." Imagination, cognition and personality 9.3 (1990): 185-211. [6] Marechal, Catherine, et al. "Survey on AI-Based Multimodal Methods for Emotion Detection." High-performance modelling and simulation for big data applications 11400 (2019): 307-324. [7] Nguyen, Thinh, et al. "The cortical network of emotion regulation: Insights from advanced EEG-fMRI integration analysis." IEEE transactions on medical imaging 38.10 (2019): 2423- 2433. [8] Alarcao, Soraia M., and Manuel J. Fonseca. "Emotions recognition using EEG signals: A survey." IEEE Transactions on Affective Computing 10.3 (2017): 374-393. [9] Niemic, Christopher. "Studies of emotion: a theoretical and empirical review of psychophysiological studies of emotion." (2004). [10] Vempati, Raveendrababu, and Lakhan Dev Sharma. "A Systematic Review on Automated Human Emotion Recognition using Electroencephalogram Signals and Artificial Intelligence." Results in Engineering (2023): 101027.
Docket No.2957092-000002-WO2 Filed December 23, 2024 [11] Zheng, Wei-Long, and Bao-Liang Lu. "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks." IEEE Transactions on autonomous mental development 7.3 (2015): 162-175. [12] Koelstra, Sander, et al. "Deap: A database for emotion analysis; using physiological signals." IEEE transactions on affective computing 3.1 (2011): 18-31. Aldawsari, H., Al-Ahmadi, S., & Muhammad, F. (2023). Optimizing 1D-CNN-Based Emotion Recognition Process through Channel and Feature Selection from EEG Signals. Diagnostics, 13(16), 2624. https://doi.org/10.3390/diagnostics13162624 Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124–129. https://doi.org/10.1037/h0030377 Li, J., Li, S., Pan, J., & Wang, F. (2021). Cross-subject EEG emotion recognition with self- organized graph neural network. Frontiers in Neuroscience, 15, 611653. Musha, T., Terasaki, Y., Haque, H. A., & Ivamitsky, G. A. (1997). Feature extraction from EEGs associated with emotions. Artificial Life and Robotics, 1(1), 15-19. Ozdemir, M. A., Degirmenci, M., Izci, E., & Akan, A. (2021). EEG-based emotion recognition with deep convolutional neural networks. Biomedical Engineering/Biomedizinische Technik, 66(1), 43-57. Ouyang, D., Yuan, Y., Li, G., & Guo, Z. (2022). The effect of time window length on EEG- based emotion recognition. Sensors, 22(13), 4939. Qing, C., Qiao, R., Xu, X., & Cheng, Y. (2019). Interpretable emotion recognition using EEG signals. IEEE Access, 7, 94160-94170. Salama, E. S., El-Khoribi, R. A., Shoman, M. E., & Shalaby, M. A. W. (2021). A 3D- convolutional neural network framework with ensemble learning techniques for multi-modal emotion recognition. Egyptian Informatics Journal, 22(2), 167-176.
Docket No.2957092-000002-WO2 Filed December 23, 2024 Sun, J., Wang, X., Zhao, K., Hao, S., & Wang, T. (2022). Multi-channel EEG emotion recognition based on parallel transformer and 3D-convolutional neural network. Mathematics, 10(17), 3131. Yu, X., Li, Z., Zang, Z., & Liu, Y. (2023). Real-Time EEG-Based Emotion Recognition. Sensors, 23(18), 7853. https://doi.org/10.3390/s23187853 Zheng, W., & Lu, B. (2019). SEED-IV: A large-scale EEG dataset for emotion recognition. Shanghai Jiao Tong University. Zheng, W., & Lu, B. (2021). SEED-V: A large-scale EEG dataset for emotion recognition. Shanghai Jiao Tong University. Example 4 [1] S. Clay, “Teens feeling stressed, and many not managing it well,” Monitor on Psychology¸vol.45, no.4, pp.20, Apr.2014. [Online]. Available: https://www.apa.org/monitor/2014/04/teen-stress.