
WO2025111381A1 - Methods and apparatuses for secure real-time human-computer interfacing using dense opticomyography - Google Patents

Methods and apparatuses for secure real-time human-computer interfacing using dense opticomyography

Info

Publication number
WO2025111381A1
WO2025111381A1 (PCT/US2024/056750)
Authority
WO
WIPO (PCT)
Prior art keywords
user
sensors
data
domg
optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/056750
Other languages
English (en)
Inventor
William Anthony LIBERTI III
Jiang Lan FAN
Laurel DUNN
Zuzanna BALEWSKI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Morphosis Inc
Original Assignee
Morphosis Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Morphosis Inc filed Critical Morphosis Inc
Publication of WO2025111381A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • Described herein are methods and apparatuses (e.g., systems and devices, including computer software, hardware and/or firmware) configured to enable a wearable human-computer interface to provide a continuous, real-time authentication and control signal for various device interactions.
  • These interfaces use a dense arrangement of non-invasive biometric sensors that leverage a user’s unique anatomy and physiological signals that govern arm and finger movements.
  • This approach can both uniquely identify a user and decode their fine motor movements, revealing ‘who’ a user is and ‘how’ they are interacting with digital devices in real time. This combination may provide continuous authentication for every interaction a user has with technology.
  • Described herein are methods and apparatuses for continuous, real-time biometric security and control approaches that may use one or a plurality of opticomyographic sensors to provide both a control signal and a concomitant identity signal to certify the identity and movement of user interactions with a device, machine and/or apparatus.
  • The approaches described herein may non-invasively (i.e., without requiring direct contact, surgical implantation, etc.) detect changes in an optical property signal from tissue, such as one or more of light absorption, reflection, transmission, optical density, etc.
  • The signal that arises from the optical property of the tissue, or a change in the optical property of the tissue, may be referred to herein as an “optical property signal.”
  • This optical property signal may be processed to isolate specific signals that may indicate a specific muscle action, innervation and/or motor signal.
  • The optical property signal may be processed to identify tissue structure and properties such as size, orientation, color and/or texture of skin, scars, tattoos, hair, blood, blood flow and oxygenation, as well as vasculature and musculoskeletal structures/systems including but not limited to muscles, bones, joints, cartilage, tendons, ligaments, blood vessels, and connective elements. Collectively and/or individually, these tissue elements can serve as unique identifiers of an individual.
  • The same optical property signal can thus be used to identify an individual through their unique, idiosyncratic anatomical tissue features, while time-varying features of the tissue optical properties encode the user’s actions and/or intended action(s).
  • The user’s intent refers to the user’s intended actions, and may include a planned movement, an imagined movement, or a small physiological change below the overt movement threshold.
  • The methods and apparatuses described herein may provide a biomedical imaging approach that uses both static and dynamic optical property signals.
  • Light in visible and near-visible wavelengths (for example, 650 nm, 900 nm, or one or more wavelengths between about 400 nm and about 1700 nm) is directed at the skin, and the change in the tissue optical signal is recorded by a plurality of detectors.
  • Tissue has inhomogeneous spectral properties based on its composition. For example, subcutaneous vasculature (e.g., blood vessels) absorbs differently depending on the wavelength (e.g., visible or near-IR light), so veins and arteries can have a higher spectrophotometric contrast compared to the surrounding tissue, while at other wavelengths superficial tissue features such as skin become more pronounced.
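As an illustration of this wavelength-dependent contrast, the following is a minimal sketch (not from the disclosure; array shapes, wavelengths, and names are assumptions) that highlights structures whose absorption differs between two co-registered frames:

```python
# Hypothetical sketch: wavelength-dependent tissue contrast.
# Two co-registered frames are captured under different illumination
# wavelengths; structures that absorb more at one wavelength (e.g.,
# blood-rich vessels) stand out in a simple log-ratio image.
import numpy as np

def differential_contrast(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Log-ratio of two co-registered frames taken at different wavelengths."""
    eps = 1e-6  # avoid log(0) on dark pixels
    return np.log((frame_a + eps) / (frame_b + eps))

green = np.random.rand(64, 64)  # stand-in for a frame under ~530 nm light
red = np.random.rand(64, 64)    # stand-in for a frame under ~650 nm light
vessel_map = differential_contrast(red, green)
```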
  • Image detectors may include, for example, an imaging array or a CMOS optical sensor.
  • A light-gathering component, such as a conventional lens, a diffuser, or a micro-lens array, may be included in the optical path.
  • Tissue structure includes skin, vasculature, tendon and muscle.
  • Like a fingerprint, there are structural differences between individuals, particularly in vasculature and the micro-features of skin texture, that can be used for identification.
  • Voluntary muscle movements cause local tissue distortions (relative to a rigid skeleton), and muscle activity can produce changes in both blood chemistry and flow, which together cause changes in the optical properties of tissue that are detected by optical approaches such as spectrophotometry. Additional changes in tissue optical properties occur due to involuntary changes in blood flow, including cardiac physiology.
  • Both voluntary and involuntary signals, generated in some way by muscle movements, can be used to infer both the specific movements and the physiological states of an individual. These signals can be used both to certify the identity of the user and to decode their intentions/actions for continuous, real-time control of technology such as a phone, personal computer, drone, software application, secure system and the like.
  • A piece of technology that may be authenticated or controlled via this method is referred to henceforth as a ‘downstream device.’
  • Described herein are methods and apparatuses that may use optical approaches to non-invasively, rapidly and accurately determine an individual’s identity and intention through measurement of tissue movements and/or gross tissue structure, in an approach referred to herein as dense opticomyography (dOMG).
  • Both the structure-based and movement-based changes in tissue optical properties can be used to create what is referred to herein as dense opticomyography (dOMG) data.
  • This data may be taken from one or multiple sensors, which may be 1-dimensional, 2-dimensional, 3-dimensional, multi-spectral, etc., and either static or a time series.
  • Acquired images are used to create or compare models that may be used to identify individuals and decode identity, intended/unintended movements, and the intent to move or act, which can then be used to interact with technology.
  • FIG. 1 shows a schematic drawing of the wearable apparatus described herein for collection and synthesis of biometric data from a dense array of optical sensors.
  • FIG. 2 schematically illustrates how the wearable interface described herein interfaces with a downstream device (e.g., for control purposes) and remote computing resources (e.g., for data processing).
  • FIG. 3 schematically illustrates sample data collected by placing the device on a wrist and shows dOMG data from different wavelengths of illumination light, highlighting anatomical features present.
  • FIG. 4 illustrates an example procedure for handling of data, including processing/modification of raw data and interpretation of data, for example, using machine learning (ML) techniques, to construct signals relevant for downstream applications (e.g., identification and/or controls).
  • FIG. 5 shows the performance of an illustrative machine learning algorithm used for determining the identification of the wearer, comparing the true identity of the wearer against the identity determined (i.e., estimated) by the model.
  • FIG. 6 illustrates the experimental setup for movement decoding using dOMG interface for a user with one impaired and one non-impaired hand, including an external tracking camera.
  • FIG. 7 provides a schematic diagram showing methods of movement decoding using dOMG interface for a user with one impaired and one non-impaired hand.
  • FIG. 8 illustrates procedures for application of the dOMG interface for enrollment and continuous identification of a user for granting secure access to a password management system.
  • FIG. 9 illustrates procedures for application of dOMG device data to interfacing with downstream devices, while providing continuous identity verification.
  • The dense opticomyography (dOMG) methods and apparatuses described herein may include wearable interfaces for continuous, real-time authentication and control of technology based on tissue structure and/or movement. These methods and apparatuses may relay identification and/or control signals specific to an individual to a downstream device, which may be configured either to be controlled by the relayed signal(s) or to process the dOMG data signals for controlling one or more apparatuses, and/or recording, transmitting or analyzing the information.
  • An apparatus as described herein may include: a plurality of optical sensors configured to sense an optical property, wherein the plurality of optical sensors comprises at least one light emitter and a plurality of optical detectors; a support configured to hold the plurality of optical detectors of the spectrophotometric sensors adjacent to a skin surface so that the plurality of optical detectors are arranged in a pattern relative to the skin surface; and a processor configured to receive signals from the plurality of optical detectors.
  • These various dOMG data streams are used to isolate anatomical signatures related to skin, neuromuscular, vascular, and other tissues (e.g., moles, pores, hairs, etc.), as well as to track physiological changes in anatomy that may occur over time related to phenomena such as movement, aging, etc. Separation of anatomical features and/or changes in such features over time may be performed optically (e.g., adjusting the illumination, optical sensors, and/or other hardware/imaging parameters) and/or using statistical/machine learning techniques (e.g., signal processing, dimensionality reduction, neural networks, etc.).
  • Static and/or time series data including various types of raw signal data, reconstructed image data, and/or transformations of the data (e.g., obtained by passing the data through statistical and/or machine learning models), may be used to train one or a plurality of machine learning algorithms to enable one or a plurality of downstream applications (e.g., user identification, decoding of user movements, control of a mechanical device, etc.).
  • FIG. 1 illustrates an example of an apparatus as described herein.
  • The apparatus 100 includes a support 101 configured as a strap or band that may be secured over a subject’s arm, forearm, wrist, etc.
  • The support 101 holds a plurality of dOMG sensor sets, which include optical emitters and detectors 105, along the internal (skin-facing) side of the strap.
  • The sensor sets may be integrated into the support or may be coupled to the support.
  • The apparatus 100 also includes a processor 107 including or coupled to an output 109.
  • The processor and/or output may be within a housing attached to the support.
  • The optical property signal may be measured by the system (e.g., by the optical sensor set).
  • An optical element 103, such as a lens or lenses, a diffuser, or a micro-lens array, may be placed between the skin and the sensor.
  • The optical elements may be custom designed and constructed through additive manufacturing and the like.
  • FIG. 2 schematically illustrates one example of a dOMG interface, or sub-system, as described herein, where the optical system may be used to interface with a downstream device through continuous, real-time authentication and control.
  • The dOMG interface system 201 includes one or more dOMG sensor sets, each having both an emitter and a sensor (e.g., a pair including an optical emitter and an optical sensor, or a combined emitter/sensor) providing input to the processor.
  • The wearable interface 201 may be configured as a strap, band, garment, brace, patch, etc. that is configured to be worn adjacent to or against the subject’s skin.
  • The system also includes one or more optical sensor set(s) 203 as described herein, in addition to other sensors.
  • The processing circuitry 207 and/or the processor 209 may include modules for processing the signal (e.g., to filter or amplify it) and to decode intended actions and/or a user’s identity.
  • The processing circuitry 207, 209 may communicate with another (secondary) processor 211 that may store, transmit (e.g., to a remote or local server 213) or process data from the sensors.
  • The processing circuitry may include a flexible printed circuit board (PCB) or microwire interconnect, and a microprocessor that provides power and common ground to the sensor(s). A microprocessor may also provide signal processing.
  • The processing may be done directly by the processor 209 (e.g., microprocessor, computer, etc.), without the need for intermediate circuitry.
  • Data recorded from the sensor(s) may be streamed to a processor and to other devices 211 for authentication and/or closed-loop control.
  • The optical sensor set may be configured to detect an optical property from the tissue, and may include, for example, one or more light emitters and one or more optical detectors.
  • The light emitter may be any appropriate light emitter, such as (but not limited to) an LED, a laser, or the like.
  • The light emitter may emit a single wavelength or color, a range of wavelengths, or a plurality of different discrete wavelengths (or discrete bands of wavelengths).
  • For example, the light emitter may comprise an LED configured to emit red light.
  • The light emitter may be configured to emit light in the infrared (IR, including the near-infrared) spectrum, such as between about 700 nm and about 800 nm. In some examples the light emitter may be configured to emit light between about 600 nm and about 990 nm. The light may be emitted continuously or in a pulsed manner (e.g., at a frequency of between about 5 Hz and 1000 Hz, greater than 10 Hz, greater than 100 Hz, etc.); one way pulsed illumination can be exploited is sketched below. The light emitter may comprise multiple light emitters configured to emit at two or more different wavelengths or ranges of wavelengths.
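Pulsing allows lock-in style demodulation at the pulse frequency, rejecting ambient light that is not modulated; the sampling rate, pulse frequency, and filter cutoff below are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch: lock-in demodulation of a pulsed emitter.
# Mixing the detector trace with a reference at the pulse frequency and
# low-pass filtering recovers the slow tissue signal while rejecting
# ambient light that is not modulated at f_pulse.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0      # detector sampling rate, Hz (assumed)
f_pulse = 200.0  # emitter pulse frequency, Hz (assumed, within 5-1000 Hz)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic trace: tissue envelope modulated at f_pulse, plus slow ambient drift.
tissue = 1.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)  # slow physiological change
raw = tissue * np.sin(2 * np.pi * f_pulse * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)

mixed = raw * np.sin(2 * np.pi * f_pulse * t)       # mix down to baseband
b, a = butter(4, 10.0 / (fs / 2), btype="lowpass")  # keep the < 10 Hz envelope
demodulated = filtfilt(b, a, mixed)                 # approximately tissue / 2
```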
  • The light emitter(s) may comprise both the light source and the media that the light passes through, e.g., air, polymers, plastics, glass, gels, and/or some combination thereof.
  • The light emitter(s) may emit directional light at incident angles ranging from 0 degrees (parallel) to 90 degrees (perpendicular) relative to the imaging plane, or some combination thereof.
  • The optical detector may be any appropriate optical detector, such as (but not limited to) a photodetector/photosensor, e.g., photodiodes, charge-coupled devices (CCDs), phototransistors, quantum dot photoconductors, photovoltaics, photochemical receptors, neuromorphic imagers, etc.
  • The optical sensor set may be integrated, so that one or more light emitters are paired with one or more light sensors.
  • The optical sensor set may include a single light emitter or a pair of light emitters with a plurality of light sensors.
  • The one or more light emitters may be separate from the one or more light sensors.
  • The one or more light emitters and one or more light sensors may be arranged on and/or secured to the support, so that the light emitter(s) and light sensor(s) of the optical sensor set are arranged adjacent to each other and/or opposite each other, so that light from the light emitter(s) may first travel through the tissue before being sensed by the light sensor(s).
  • The methods and apparatuses described herein may generally detect an optical property from the tissue that may be correlated with muscle movement.
  • The optical property may be tissue absorption, emission, and/or reflection of light, as will be described herein.
  • FIG. 3 shows an example of tissue that may be examined via this approach. Shown here is a schematic of the left forearm and hand 301 of a user. The inset shows a few non-exhaustive examples of biological features that contribute to the optical property signal of tissue, observed with different wavelengths (green 303 and red 305) of light, including skin texture 307, tendons 309, skin ridges 311, blood vessels 313, hair 315, etc. The time-varying distortion of these anatomical features can be used as a control signal that is unique to a specific individual.
  • The methods or apparatuses described herein may include one or more processors, such as microprocessors, and/or additional circuitry.
  • The processor may include instructions to perform any of the methods described herein.
  • The processor may be configured to isolate the component of the optical property signal corresponding to the heartbeat, voluntary or involuntary muscle movement, or any or all tissue properties such as size, orientation, color and/or texture of skin, hair, blood, vasculature and musculoskeletal structures/systems including but not limited to muscles, bones, joints, cartilage, tendons, ligaments and connective tissue.
  • The optical property may correspond to the differential optical property signal of light at two (or more) wavelengths.
  • This processor may be configured to compare any detected optical property signal to previously acquired optical property signals in order to determine the probability that these signals came from the same individual.
  • This processor may therefore use statistical and/or machine learning (ML) approaches, such as an artificial neural network (e.g., a convolutional neural network); a minimal sketch of the comparison step follows.
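Here the embedding is a stand-in for a trained network; all names, shapes, and thresholds are assumptions:

```python
# Hypothetical sketch: score whether a probe snapshot matches an
# enrolled template by comparing feature-vector similarity.
import numpy as np

def embed(signal: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network; here just a normalized flatten."""
    v = signal.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def same_user_score(probe: np.ndarray, template: np.ndarray,
                    sharpness: float = 10.0) -> float:
    """Map cosine similarity to a (0, 1) score via a logistic squash."""
    cos = float(embed(probe) @ embed(template))
    return 1.0 / (1.0 + np.exp(-sharpness * (cos - 0.5)))

enrolled = np.random.rand(32, 32)                 # template from enrollment
probe = enrolled + 0.05 * np.random.rand(32, 32)  # new snapshot, same user
print(same_user_score(probe, enrolled))           # near 1.0 for a match
```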
  • Optical sensor sets and processors may, in concert, perform methods to remove, isolate or augment high- and/or low-spatial-frequency information, ambient light, noise and/or artifacts from the images.
  • The optical sensor set(s) is/are secured to a support.
  • The apparatus may include a plurality of optical sensor sets secured by the support.
  • The support may be any structure configured to hold the optical sensor set(s) adjacent to the tissue from which the signal will be non-invasively measured.
  • The support may be coupled to some or all light emitters, optical components, optical detectors, sensors, microprocessors, and electrical circuit components used in the fully functional device.
  • The support may be a flexible wrist band made from one or more bendable, form-fitting, and/or stretchy materials, e.g., fabrics, elastics, plastics, silicones, or bendable metals. Part of the support may be made from rigid material.
  • The support may be an accessory to an existing wrist wearable, such as a watch, smartwatch, fitness tracker, or jewelry item, and may take on the form of an existing component of said wearable (e.g., a watch band) or be an auxiliary component.
  • The support may include additional flexible material, such as foam, plastic, or silicone, at its interface with the skin to assist in blocking ambient light. Any of these structures or supports may be fabricated using additive manufacturing (e.g., 3D printing), injection molding, or similar fabrication approaches.
  • The support may be or may include a garment, wristwatch or other wearable structure on the user’s arm (e.g., forearm or wrist).
  • The support may be or may include a strap, band, or patch.
  • The support may be configured to be secured to the body (and in some cases removably secured to the body).
  • The support is configured to fit on a user’s arm (e.g., forearm, shoulder, upper arm, wrist, elbow, hand and/or fingers, etc.).
  • The optical sensor set(s) may be coupled to the support.
  • The optical sensor set(s) may be rigidly coupled to the support and/or flexibly coupled to the support.
  • The support may hold the processor (e.g., a controller, control circuitry, microprocessor, electronic communication circuitry, memory, etc.) and/or a power source (e.g., battery, capacitive power source, regenerative power source, etc.), and/or connections (e.g., wires, traces, etc.), etc.
  • The support may hold other, non-optical sensors (e.g., IMU, GPS, temperature, force/pressure, fingerprint), and/or interfacing components (e.g., mechanical or capacitive buttons, dials, and sliders), and/or communicative components (e.g., microphone, speaker, haptic motor).
  • The support may include one or more housings enclosing all or part of the aforementioned components.
  • Any of the apparatuses (devices, systems, etc.) described herein may also include one or more signal conditioners configured to modify (e.g., condition) the signal of or from the optical sensor set.
  • The signal conditioner may include one or more of: a lens, a diffuser, a filter, and a lens array.
  • The signal conditioner may be part of the optical sensor set or separate from the optical sensor set.
  • The signal conditioner may be coupled to the support and/or at least partially enclosed within the housing(s). In some examples the conditioning may also be performed in part by the processor(s).
  • The processor may be configured to distinguish tissue movement patterns or static features of tissue as specific to an individual.
  • The processor may be configured to isolate an optical signal corresponding to a heartbeat, breathing rate or any other regular, periodic signal from the received optical property signals by taking the signal that is common to the plurality of optical detectors; a minimal sketch of this step follows.
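The sketch assumes the shared periodic physiology dominates the detector-averaged signal:

```python
# Hypothetical sketch: split detector data into a common-mode component
# (shared physiology such as heartbeat) and a residual that keeps
# detector-specific structure (e.g., local tissue movement).
import numpy as np

def split_common_mode(detectors: np.ndarray):
    """detectors: (n_detectors, n_samples) array of aligned traces."""
    common = detectors.mean(axis=0)  # signal common to all detectors
    residual = detectors - common    # detector-specific remainder
    return common, residual

traces = np.random.rand(16, 5000)    # stand-in: 16 detectors, 5000 samples
heartbeat_like, movement_like = split_common_mode(traces)
```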
  • A processor includes hardware that runs the computer program code.
  • The term ‘processor’ may include (or may be part of) a controller and may encompass not only computers having different architectures, such as single- or multi-processor architectures and sequential (Von Neumann) or parallel architectures, but also specialized circuits such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices and other devices.
  • The processor may include one or more of the following computing elements: one or more microprocessors integrated as part of the apparatus (i.e., coupled to the support); a computer, phone, or computing device connected to the apparatus (e.g., via Bluetooth, Wi-Fi, USB, RF or other similar means); or a computing resource (e.g., a remote server or virtual machine).
  • These processor(s) may work independently and/or in conjunction with one another to perform operations related to synthesis, storage, interpretation and application of the data, for example, the transformation of raw data and the application of ML models to yield a continuous output signal that reports the identity and/or intention of the wearer.
  • A supervised ML model may produce concurrent authentication and control signals that are passed to a downstream device (e.g., personal computer, phone, remotely controlled electronic or mechanical device, etc.).
  • FIG. 4 illustrates an example of a process for handling dOMG and other sensor data.
  • Raw (unprocessed) dOMG and other sensor output 401 may be modified during a preprocessing stage 403, for example, by amplification, filtering, signal subtraction (e.g., of heartbeat), digitizing, etc.
  • Preprocessing may be integrated with the dOMG sensor set itself, or it may be separate.
  • Preprocessing may be integrated with the processor, or it may be coupled to, but distinct from, the processor.
  • Preprocessing may include registering the optical sensors (and/or registering the dOMG representation taken by the optical sensors) 405.
  • Processing may be done locally by an onboard processing unit, and/or the data may be sent via a transmitter to be processed remotely; a minimal preprocessing sketch follows.
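The band-pass plus per-channel normalization below stands in for the amplification/filtering/digitizing steps; the sampling rate and cutoffs are assumptions:

```python
# Hypothetical sketch: band-pass filter and z-score raw detector traces.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw: np.ndarray, fs: float = 1000.0,
               band: tuple = (0.5, 45.0)) -> np.ndarray:
    """raw: (n_channels, n_samples). Filter, then normalize each channel."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, raw, axis=-1)
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True) + 1e-9
    return (filtered - mean) / std

frames = np.random.rand(8, 4000)  # stand-in raw sensor output 401
clean = preprocess(frames)        # ready for registration/decoding 405
```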
  • The optical data encodes a user’s anatomy and physiology, including the position and/or movement of a body region, and determines how the optical property signal should be interpreted as a control or identity signal for interfacing with a device, computer and/or software receiving the output indicator of position and/or movement of the body part (or an indicator of this movement), as well as the identity of the user.
  • Computations to support synthesis and interpretation of the signal may occur partially and/or entirely on the wearable device (i.e., on the embedded microprocessor), or partially or entirely remotely (e.g., using a downstream device or a remote processor located in a separate remote computing environment).
  • The synthesis of derivative streams used to infer the identity and/or control state of the wearer may occur on an embedded microprocessor, while the derivative streams are produced as output from a statistical and/or ML model trained using remote data and compute resources (i.e., on a downstream device, remote server, cloud environment, or the like).
  • The device may stream data to a local computer, which transfers it to a remote computing environment, thus triggering the creation of a machine learning (ML) model.
  • Models may be trained locally on the processor of the dOMG interface without accessing a downstream device or server.
  • The ML model is packaged as software containing a complete set of instructions for translating raw dOMG and sensor data gathered or generated by the device into an output stream suitable for enabling identity and/or control applications.
  • The ML model generated in a remote computing environment would then be transferred to the local computer (e.g., over the internet) and then to the microprocessor itself (e.g., over Bluetooth, wireless, etc.).
  • Output streams synthesized from the data would then be passed (either directly or via an application programming interface or graphical user interface) to software and hardware tools enabling applications including identity verification, authentication, controls, etc.
  • Examples of output streams produced by the device include but are not limited to the following: control signals, keystrokes, mouse movements, the identity or body position of the wearer, certain dOMG signals or signatures, raw and/or processed dOMG or sensor data.
  • Data streaming and synthesis may be done continuously, and/or in real time, or may be stored for later transfer, review, analysis, and use, including to improve the performance of ML models for applications described herein.
  • Processed or raw data may be stored on the dOMG interface, and/or on downstream devices, remote servers and the like.
  • The system may incorporate a self-assessment mechanism to determine both the presence of a wearer and, if worn, the specifics of the interaction between the wearer and the downstream device. In one example, it may be determined whether the device is currently being worn and whether the fit is appropriate (i.e., neither too loose nor too tight). Moreover, the system may assess its proximity to a downstream device, and whether that device is within a predefined range for communication. If the system detects that it is too far from the downstream device, it can take predetermined actions, such as alerting the user or initiating a security protocol to prevent access in the event of physical separation and/or unauthorized use.
  • These assessments may utilize an array of integrated sensors to decode the operational state in real time, including but not limited to: dOMG sensors, inertial measurement units (IMUs), wireless modules (e.g., Bluetooth, Wi-Fi, RF, and the like), temperature sensors, pressure sensors, force sensors, position/tracking sensors (GPS and the like) and/or capacitive sensors. A minimal sketch of this self-assessment follows.
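The sensor interface, thresholds, and RSSI-based proximity check below are assumptions for illustration:

```python
# Hypothetical sketch: decide on an action from wear state, fit, and
# proximity to the downstream device.
def assess_state(sensors, rssi_dbm: float, min_rssi: float = -70.0) -> str:
    if not sensors.skin_contact():                  # e.g., dOMG/capacitive check
        return "not_worn"                           # downstream may log out
    if not (0.2 < sensors.strap_pressure() < 0.8):  # illustrative fit band
        return "adjust_fit"                         # alert the user
    if rssi_dbm < min_rssi:                         # too far from downstream
        return "out_of_range"                       # start security protocol
    return "ok"
```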
  • One or more cameras may be used to track body movements so that they may be coordinated with detected optical signals and signals from other sensors (e.g., an IMU) on the apparatus being worn.
  • The apparatus may be used to train a bodily reconstruction using computer vision, with one or more cameras associated with the signal during the calibration phase and/or later.
  • These cameras may be worn on the body, for example on a virtual, augmented, or mixed reality (collectively referred to as extended reality, XR) headset or smart glasses.
  • The apparatus may train bodily reconstruction using depth- or force-sensitive sensors, such as an IR dot matrix or a surface-area-of-contact-based measurement of force using a touchscreen or a camera and a clear surface, instead of or in addition to a camera observational system.
  • The apparatus may train movement/interface decoders based on intended or instructed actions. In some cases, the apparatus may train movement/interface decoders based on user feedback/input via self-reports and/or instructed reports. In other instances, a camera may use pose estimation of body parts and relate this to the optical property signal. Conventional computer interfaces may also be used to infer body position, including keypresses, computer mouse movement and clicking, touchscreen inputs, etc.
  • Processing and synthesis of the data to yield requisite insights may use statistical and/or machine learning approaches (e.g., convolutional neural networks, regression, clustering, decomposition).
  • These statistical and/or machine learning models may include supervised approaches, whereby ground truth data regarding user identity and/or actions is available at the time of training, and/or unsupervised approaches, whereby such ground truth information is not available to the model.
  • Data collection to enable supervised learning models may involve prompting the user to perform a variety of bodily movements. In some examples, hand gestures and poses may be performed.
  • The user may interact with an external object, including, but not limited to, force sensors, keyboards, mice, touchscreens, remote-controlled devices, and non-electronic objects.
  • The user may interact with virtual objects in XR.
  • The user may provide biometric information using a separate data-collecting apparatus, such as a smartphone camera, while wearing or not wearing the dOMG interface.
  • The user may perform movements spontaneously or guided by visual, haptic and/or auditory instruction.
  • Movements may be performed volitionally, involuntarily, or with mechanical assistance (e.g., using a translational stage or robotic exoskeleton).
  • Models may be trained on data generated from one or more sessions, for example using data from one or more users and one or more wearable devices for purposes of producing models that are generalizable across users and/or devices.
  • Outputs pertaining to one or more applications may be derived from a single model or using multiple ML models (e.g., hierarchical models, ensemble models, transfer learning, etc.) combining different data sources, model formulations, or objective functions.
  • Data collected from a plurality of users may be used to train a model for extracting features that differentiate individuals present in the training data; the same model may then be adapted (e.g., using statistical methods and/or transfer learning) to produce an output stream certifying if/when the device is being worn and if/when the identity of the wearer matches the identity of the individual to whom the device is registered.
  • FIG. 5 shows the accuracy performance of an ML model identifying which user is wearing the device, shown here as a confusion matrix.
  • The y-axis shows the true identity of the user, while the x-axis shows the identity determined by the ML model.
  • The model demonstrates high accuracy, specificity, and precision across all users tested in this example; the sketch below reproduces this style of evaluation in miniature.
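Placeholder data and a generic classifier are used here, since the disclosure does not specify the model family:

```python
# Hypothetical sketch: confusion matrix for a wearer-identification model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X = np.random.rand(300, 40)            # stand-in dOMG feature vectors
y = np.random.randint(0, 5, size=300)  # labels for 5 enrolled users

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te))  # rows: true id, cols: estimated
print(cm)
```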
  • Data collected from the device (dOMG and/or sensor data) and/or externally collected data may undergo methods of dimensionality reduction, such as principal component analysis, singular value decomposition, non-negative matrix factorization, clustering, etc., with the goal of extracting relevant physical and conceptual features, both static and dynamic; a minimal sketch follows.
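Here, principal component analysis stands in for the reduction method; feature shapes and the component count are assumptions:

```python
# Hypothetical sketch: project high-dimensional dOMG features onto a
# handful of principal components.
import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(500, 256)       # stand-in: 500 frames x 256 features
pca = PCA(n_components=10).fit(features)  # keep 10 components (assumed)
reduced = pca.transform(features)         # (500, 10) low-dimensional summary
print(pca.explained_variance_ratio_.round(3))
```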
  • For example, ground-truth body-tracking data captured by a webcam may be in the form of node positions in 3D space, which can be reduced, using dimensionality-reducing methods, to a set of stereotyped movements or gestures.
  • Re-applying previously trained decoders: an ML or statistical model created previously can be applied over multiple instances of wear. This is done by determining the position and orientation of the system/device on the user’s body relative to when the device was previously worn.
  • Positional information may be inferred from biological landmarks such as vasculature, hair, pores, skin ridges and wrinkles, moles, and blemishes, and/or artificial landmarks such as tattoos, stickers, or drawn/placed markings on the skin.
  • Position sensor signals collected by an IMU may be used to measure the position of the system relative to the user’s body.
  • Efforts may be taken to make a model robust to variations in the position or placement of the device during wear. This may be done to account for a loosely worn device, or to compensate for the changing physical location of the system relative to body tissue. One plausible re-registration step is sketched below.
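Phase correlation between a stored landmark image and the current view is one way (an assumption, not a method named in the disclosure) to estimate the displacement:

```python
# Hypothetical sketch: estimate device displacement across wears from a
# landmark image, so previously trained decoders can be re-applied.
import numpy as np
from skimage.registration import phase_cross_correlation

reference = np.random.rand(128, 128)           # landmark image, prior wear
current = np.roll(reference, (3, -5), (0, 1))  # stand-in for a shifted view

shift, error, _ = phase_cross_correlation(reference, current)
print(shift)  # (row, col) offset estimate used to realign the decoders
```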
  • An optical sensor set or external imaging apparatus with a larger physical scope than the system may be used to collect data from a substantially larger region of the body than that of the wearable apparatus.
  • A model may be created using data combined from various locations of wear.
  • Additional dOMG data may be acquired over subsequent days, weeks and/or years for the purpose of updating statistical and/or machine learning models. Additional dOMG data may be used to create new models or update an existing model in order to increase the accuracy/fidelity of user identification and/or body state inference. This data may include feedback from the user, and/or incorporate patterns of behavior and interactions with downstream devices on an ongoing basis. In some cases, models may exploit spontaneous and/or instructed body positions, movements, and gestures to acquire dOMG data to update or re-create a ML model.
  • The objective of this procedure is to incorporate gradual or sudden anatomical changes that may occur in the wearer, such as changes in skin consistency, tone, or blemishes, for example due to age, makeup, physical activity, ailments, and/or purposeful or accidental tissue modification (e.g., tattoos, cuts, scars) that may occur over time.
  • Models may be updated to reflect changes, improvements, or modifications in the dOMG apparatus that was used to create the ML models (e.g., changes in calibration or physical hardware).
  • Processes for data collection and model updating may use computer and/or storage resources located on the dOMG apparatus (i.e., embedded processors), on a local computer (e.g., wirelessly connected to the device), or on a remote server (e.g., in the cloud) where any previously acquired data may be referenced.
  • One application is to provide control/identity signals for a user with a mobility impairment.
  • This impairment may be due to any injury or disability that leads to decreased mobility in any form, including paraplegia, amputation, deformity, or neurological impairment or injury such as ALS, Parkinson's disease, or the like.
  • A user may have an impairment on one side of their body, where the movements and body position on the non-impaired side may be used to estimate the intended movements of the impaired side.
  • FIG. 6 illustrates the experimental setup for movement decoding using dOMG apparatuses for a user with one impaired and one non-impaired hand, including an external tracking camera.
  • FIG. 7 describes the method of movement decoding using dOMG interface 701 for a user with one impaired and one non-impaired hand. The user may be instructed to (attempt to) make symmetrically identical movements with both hands 703 while dOMG signals and hand-tracking information are collected 705.
  • Hand and finger movements from the non-impaired side 707 can therefore be used to infer the relationship between dOMG signals on the impaired side 709 (which, due to the impairment, do not lead to intended movements by the wearer), and the actual intended finger movements.
  • A user may be instructed to attempt or imagine making certain movements or actions, and their dOMG data can be used to estimate their intended movements.
  • Such a relationship could then be used for any of the control purposes detailed in the following sections.
  • A device armed with the inferred knowledge of a user’s impaired hand and finger positions may provide feedback that can be used to guide physical therapy. A minimal decoder-training sketch follows.
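Ridge regression stands in for the decoder here; the shapes and variable names are assumptions:

```python
# Hypothetical sketch: fit a decoder from impaired-side dOMG features to
# hand-tracking targets recorded from the non-impaired side during
# symmetric (mirrored) movements.
import numpy as np
from sklearn.linear_model import Ridge

domg_impaired = np.random.rand(1000, 64)  # features from impaired-side device
tracked_hand = np.random.rand(1000, 15)   # e.g., 5 fingers x 3D positions

decoder = Ridge(alpha=1.0).fit(domg_impaired, tracked_hand)
intended_pose = decoder.predict(domg_impaired[-1:])  # inferred intended movement
```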
  • Biometric data collected by the device, coupled with statistical and machine learning models for synthesizing the data, provides a mechanism for continuously certifying if/when the device is being worn, determining the identity of the wearer, and decoding the actions they perform. Models that perform any and all of these functions are individualized, thereby reporting not only the actions of the individual, but also the identity of the individual who performed the actions.
  • FIG. 8 illustrates procedures for application of the dOMG apparatus for enrollment and continuous identification of a user for granting secure access to a password management system.
  • A new user inputs their name and login credentials (e.g., by granting access to a password manager) 801 and is guided through a series of prompts designed to generate biometric data 803 for training feature extraction and/or machine learning models to uniquely identify the enrolled user based on their biometric data 805.
  • The enrollment process produces a user identification model artifact containing software instructions for processing biometric data to determine whether (or not) the identity of the wearer matches the identity of the enrolled user or of a previously enrolled user, or whether the identity is unknown 807.
  • The dOMG apparatus continuously streams raw and processed biometric signal data to a downstream device (e.g., computer, phone, etc.) 809. The streaming data communicates to the downstream device whether (or not) the dOMG apparatus is being worn.
  • If the device is not being worn, the user is logged out 811. If the device is being worn, then the identification model artifact is used to execute software to verify the identity of the wearer 813. If the identity cannot be verified, then access is revoked and unauthorized use may be reported 815. If the identity of the user is verified, then access is granted 817 to a trusted password management system 819. This control flow is sketched below.
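All interfaces (device, id_model, session) are hypothetical stand-ins for the components described above:

```python
# Hypothetical sketch of the FIG. 8 flow: log out when unworn (811),
# verify identity (813), revoke and report (815), or grant access (817/819).
def continuous_auth_step(device, id_model, session) -> str:
    if not device.is_worn():                   # presence check (809/811)
        session.log_out()
        return "logged_out"
    wearer = id_model.verify(device.stream())  # identification artifact (813)
    if wearer != session.enrolled_user:        # unverified identity (815)
        session.revoke_access()
        session.report_unauthorized_use()
        return "revoked"
    session.grant_access()                     # trusted access (817/819)
    return "granted"
```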
  • Model outputs used in conjunction with a password management service may provide an alternative to single- or multi-factor authentication, where dOMG data collected from the wearer may be used to uniquely determine the identity of the wearer (e.g., using the statistical and machine learning approaches described above). Continuously providing the identity of the wearer may act as a certificate that assures that the individual seeking access to an account is indeed authorized to have such access (e.g., using password management software to determine which accounts the user may access).
  • Data entry to a downstream device may be ‘watermarked,’ or attributed to a specific individual (e.g., individual keystrokes in an email may be attributed to a specific person or set of persons).
  • This may be used as a means to verify the origin of digital content.
  • More data-intensive models for decoding the actions of the wearer may act as a layer of protection to prevent password theft and unauthorized use. Initially, this may work by automatically logging out of a service or reporting unauthorized use when the identity of the wearer changes or when the authenticity of a request cannot be verified. Eventually, the real-time authentication reported by the device may protect any and all interactions within secure contexts.
  • The methods and apparatuses described herein may provide a mechanism for certifying the identity of the wearer while simultaneously decoding motor states (e.g., hand/wrist/finger movements/positions) using supervised and/or unsupervised learning to control downstream devices (e.g., computers, electronic devices, mechanical devices, etc.).
  • These concurrent signals may be used to continuously certify the identity of the wearer issuing certain control actions, to verify that a user is authorized to perform certain actions, and to maintain a continuous record of the identity of the individual responsible for requesting or issuing certain actions.
  • FIG. 9 illustrates one such procedure for applying dOMG device data to interface with downstream devices, while providing continuous identity verification.
  • The user is prompted to perform a series of specific actions (e.g., movements and/or controls) 901 while wearing the device; dOMG data is recorded while these actions are performed 903.
  • Data generated from 901 and 903 are then used to train one or more models for user identification and decoding 905.
  • dOMG data generated during use is then used to continuously verify the identity of the wearer, and intervening measures are taken if the identity of the user is unknown or incorrect (e.g., reporting unauthorized use) 907.
  • The decoding model is applied to dOMG data to infer user actions/intent 909, and control signals are sent to downstream device(s) 911.
  • Decoding and/or identification model(s) may be periodically updated using data generated while the device is in use 913. The overall loop is sketched below.
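Interfaces here are hypothetical stand-ins for the FIG. 9 components:

```python
# Hypothetical sketch of the FIG. 9 loop: verify identity (907), decode
# intent (909), and forward control signals downstream (911).
def control_loop(device, id_model, decoder, downstream) -> None:
    for window in device.stream_windows():  # continuous dOMG data
        if not id_model.verify(window):     # unknown/incorrect identity
            downstream.report_unauthorized_use()
            continue                        # withhold control signals
        action = decoder.decode(window)     # infer user action/intent
        downstream.send(action)             # issue downstream control
```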
  • The system and associated methods described here may be applied to the control of, and interactions with, systems operating on devices, machines, computers and/or computerized devices.
  • The body state inference techniques described here may be used to interface with a computer, where the user mimics the physical movements normally associated with conventional input devices (e.g., mouse, keyboard, trackpad, touchscreen, etc.) in the absence of these physical devices, and these movements are converted to computer input.
  • The system may use the body state inference techniques described above and map body states not typically used for computer control onto computer inputs, for example flexing or straightening finger(s) to move a cursor, pressing a thumb and finger together with gradient force to select from a drop-down menu, or snapping the fingers to change a slide in a slideshow; one such mapping is sketched below.
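The gesture names and input events below are illustrative only:

```python
# Hypothetical sketch: translate decoded body states into computer inputs.
GESTURE_TO_INPUT = {
    "index_flex": ("cursor_move", {"dy": +5}),
    "index_extend": ("cursor_move", {"dy": -5}),
    "thumb_pinch": ("menu_select", {}),  # pinch force could set menu depth
    "finger_snap": ("slide_next", {}),
}

def dispatch(gesture: str, send_input) -> None:
    """Forward a decoded gesture to an input-event callback."""
    event, params = GESTURE_TO_INPUT.get(gesture, ("noop", {}))
    send_input(event, **params)
```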
  • The system may use the body state inference techniques described above and map body states onto computer inputs, but gradually change the body-state-to-computer-input mapping over time, such that a user with a repetitive stress disorder can repeatedly perform the same computer input while physically moving their body differently each time.
  • The system may use the body state inference techniques described above to determine body positions and/or movements deemed unhealthy or high-risk for injured or healthy users and suggest alternative input mappings for safer, more intuitive computer control.
  • dOMG interfaces may be used to interact with any kind of remotely controlled machine or device.
  • Machines may include, but are not limited to, an electronic device such as a television, drone, robot, equipment, vehicle or other mechanical device.
  • Control signals may be inferred from biometric and/or other signals recorded by the device (e.g., IMU measurements), for example, using supervised learning to train statistical and/or ML models, as detailed above.
  • Control signals may be passed to downstream electronic or mechanical devices via one or more modes of communication, including but not limited to: issuing direct control signals, sending control signals to a graphical user interface, or sending control signals to an application programming interface, which then interprets and issues downstream control signals.
  • Control signals may be generated concurrently with signals verifying the identity of the wearer, ensuring interfacing is performed by users authorized to perform the requisite controls.
  • The system and associated methods described above may be applied within the context of virtual, augmented, and/or mixed reality technologies.
  • The system may use the body state inference techniques described above to provide a virtual representation of the user’s body, hand, and/or fingers, on one or both sides of the body, in an XR context, typically used in the context of interface control.
  • The system may use the body position inference techniques described herein, in combination with wearable or external body-tracking apparatuses such as headsets, smart glasses, motion capture equipment, etc., to improve the performance accuracy of said apparatuses in contexts where other apparatuses may fail, for example when tracking is occluded by another body part or the environment, or when information that cannot be picked up by other body-tracking methods, such as pressure applied by the fingers, is needed within an XR context.
  • The system may infer the identity of a user of an XR apparatus, such as a headset or smart glasses, so that only authorized personnel may engage in exclusive XR environments and/or experiences (for example, a private meeting room) or to engage custom settings preferred by the wearer.
  • Any of the methods (including user interfaces) described herein may be implemented as software, hardware or firmware, and may be described as a non-transitory computer-readable storage medium storing a set of instructions capable of being executed by a processor (e.g., computer, tablet, smartphone, etc.) that, when executed by the processor, causes the processor to perform any of the steps, including but not limited to: displaying, communicating with the user, analyzing, modifying parameters (including timing, frequency, intensity, etc.), determining, alerting, or the like.
  • Any of the methods described herein may be performed, at least in part, by an apparatus including one or more processors having a memory storing a non-transitory computer-readable storage medium storing a set of instructions for the process(es) of the method.
  • The computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein.
  • These computing device(s) may each comprise at least one memory device and at least one physical processor.
  • The term “memory” or “memory device,” as used herein, generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions.
  • A memory device may store, load, and/or maintain one or more of the modules described herein.
  • Examples of memory devices comprise, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
  • The term “processor” or “physical processor,” as used herein, generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
  • A physical processor may access and/or modify one or more modules stored in the above-described memory device.
  • Examples of physical processors comprise, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
  • The method steps described and/or illustrated herein may represent portions of a single application.
  • One or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as the method step.
  • One or more of the devices described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally, or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
  • The term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions.
  • Examples of computer-readable media comprise, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
  • The processor as described herein can be configured to perform one or more steps of any method disclosed herein. Alternatively, or in combination, the processor can be configured to combine one or more steps of one or more methods as disclosed herein.
  • The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
  • The terms “upwardly,” “downwardly,” “vertical,” “horizontal,” and the like are used herein for the purpose of explanation only, unless specifically indicated otherwise.
  • Although the terms “first” and “second” may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element, without departing from the teachings of the present invention.
  • Any of the apparatuses and methods described herein should be understood to be inclusive, but all or a subset of the components and/or steps may alternatively be exclusive and may be expressed as “consisting of” or alternatively “consisting essentially of” the various components, steps, sub-components or sub-steps.
  • all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear.
  • a numeric value may have a value that is +/- 0.1% of the stated value (or range of values), +/- 1% of the stated value (or range of values), +/- 2% of the stated value (or range of values), +/- 5% of the stated value (or range of values), +/- 10% of the stated value (or range of values), etc.
  • Any numerical values given herein should also be understood to include about or approximately that value, unless the context indicates otherwise. For example, if the value “10” is disclosed, then “about 10” is also disclosed.
  • any numerical range recited herein is intended to include all sub-ranges subsumed therein. It is also understood that when a value is disclosed, “less than or equal to” the value, “greater than or equal to” the value, and the possible ranges between values are also disclosed, as appropriately understood by the skilled artisan. For example, if the value “X” is disclosed, then “less than or equal to X” as well as “greater than or equal to X” (e.g., where X is a numerical value) is also disclosed. It is also understood that throughout the application data is provided in a number of different formats, and that this data represents endpoints and starting points, and ranges for any combination of the data points. A brief illustrative sketch of these numeric conventions follows this list.
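To make the tolerance convention above concrete, here is a minimal sketch (not part of the patent; the function name `about` and the default tolerance are illustrative choices) of the closed interval implied by reading a stated value at a chosen tolerance:

```python
# Illustrative only: the interval implied by reading "about X" at a given
# tolerance, per the convention described above (e.g., +/- 5% or +/- 10%).
def about(stated: float, tol: float = 0.10) -> tuple[float, float]:
    """Return the closed interval [stated - tol*|stated|, stated + tol*|stated|]."""
    delta = abs(stated) * tol
    return (stated - delta, stated + delta)

print(about(10.0, 0.05))  # "about 10" at +/- 5%  -> (9.5, 10.5)
print(about(10.0))        # "about 10" at +/- 10% -> (9.0, 11.0)
```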

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Dense opticomyography (dOMG) methods and apparatuses may include wearable interfaces for continuous, real-time authentication and control of technology based on tissue structure and/or motion. The methods and apparatuses described herein may relay individual-specific identification and/or control signals to a downstream device, which may be configured to be controlled by the relayed signal(s), or may be configured to process the dOMG data signals in order to control one or more apparatuses and/or to record, transmit, or analyze the information.
PCT/US2024/056750 2023-11-20 2024-11-20 Methods and apparatuses for secure real-time human computer interfacing using dense opticomyography Pending WO2025111381A1 (fr)
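As a rough illustration of the pipeline the abstract describes, the sketch below shows one way a frame from a dense optical sensor array could be reduced to features, checked against an enrolled user-specific template for continuous authentication, and decoded into a control signal for relay to a downstream device. This is a hypothetical sketch, not the patented method: the 64-sensor array size, the normalization, the cosine-similarity threshold, and the nearest-template gesture decoding are all assumptions.

```python
# Illustrative dOMG-style loop (hypothetical; names and parameters are not
# taken from the patent).
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Normalize one frame of per-sensor optical intensities so the
    features track tissue structure/motion rather than ambient light."""
    centered = frame - frame.mean()
    return centered / (frame.std() + 1e-9)

def authenticate(features: np.ndarray, template: np.ndarray,
                 threshold: float = 0.8) -> bool:
    """Continuous authentication: cosine similarity of live features
    against the enrolled, user-specific template."""
    sim = features @ template / (
        np.linalg.norm(features) * np.linalg.norm(template) + 1e-9)
    return float(sim) >= threshold

def decode_command(features: np.ndarray,
                   codebook: dict[str, np.ndarray]) -> str:
    """Nearest-template decoding of a control gesture; a trained
    classifier would stand in here in a real system."""
    return min(codebook,
               key=lambda name: np.linalg.norm(features - codebook[name]))

# Usage: a 64-sensor array, one enrolled user, two control gestures.
rng = np.random.default_rng(0)
enrolled_raw = rng.normal(size=64)                    # enrollment recording
template = extract_features(enrolled_raw)
codebook = {"select": extract_features(rng.normal(size=64)),
            "scroll": extract_features(rng.normal(size=64))}

live_frame = enrolled_raw + 0.1 * rng.normal(size=64)  # same user, new frame
feats = extract_features(live_frame)
if authenticate(feats, template):
    # Relay the decoded control signal to the downstream device.
    print("relay downstream:", decode_command(feats, codebook))
```

In a deployed system the template match would typically be replaced by a trained classifier, and the print statement by the downstream device's actual control protocol.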

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363601139P 2023-11-20 2023-11-20
US63/601,139 2023-11-20

Publications (1)

Publication Number Publication Date
WO2025111381A1 (fr) 2025-05-30

Family

ID=95827307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/056750 2023-11-20 2024-11-20 Methods and apparatuses for secure real-time human computer interfacing using dense opticomyography Pending WO2025111381A1 (fr)

Country Status (1)

Country Link
WO (1) WO2025111381A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150212576A1 (en) * 2014-01-28 2015-07-30 Anthony J. Ambrus Radial selection by vestibulo-ocular reflex fixation
US20160274660A1 (en) * 2014-05-09 2016-09-22 Eyefluence, Inc. Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
US20170102775A1 (en) * 2015-10-08 2017-04-13 Oculus Vr, Llc Optical hand tracking in virtual reality systems
US20220405946A1 (en) * 2021-06-18 2022-12-22 Facebook Technologies, Llc Inferring user pose using optical data

Similar Documents

Publication Publication Date Title
US12086304B2 (en) Monitoring a user of a head-wearable electronic device
US12147514B2 (en) Method and system for providing a brain computer interface
US10970374B2 (en) User identification and authentication with neuromuscular signatures
CN109804331B (zh) Detecting and using body tissue electrical signals
KR102219911B1 (ko) Method and apparatus for optical detection and analysis of internal body tissue
US20230259208A1 (en) Interactive electronic content delivery in coordination with rapid decoding of brain activity
WO2022047272A9 (fr) Electronic devices with a static artificial intelligence model for contextual situations, including age blocking for vaping and ignition start-up, using data analysis, and associated operating methods
Choi et al. EarPPG: Securing your identity with your ears
US20230328417A1 (en) Secure identification methods and systems
Goudiaby et al. Eeg biometrics for person verification
Abdulbaqi et al. Spoof attacks detection based on authentication of multimodal biometrics face-ECG signals
Qiu et al. Feasibility of wrist-worn, cancelable, real-time biometric authentication via HD-sEMG and dynamic gestures
WO2025111381A1 (fr) Methods and apparatuses for secure real-time human computer interfacing using dense opticomyography
Dabas et al. A step closer to becoming symbiotic with AI through EEG: A review of recent BCI technology
Hwang User recognition system based on PPG signal
WO2025219294A1 (fr) Adaptable neurophysiological biometric authentication system and method
Mishra et al. DeepV-Net: A Deep Learning Technique for Multimodal Biometric Authentication Using EEG Signals and Handwritten Signatures
Said Machine learning based wearable multi-channel electromyography: application to bionics and biometrics
Park Authentication with Bioelectrical Signals
Shah et al. Symptom-Based Real-Time Augmented Pattern Detection and Recognition for Healthy Living
Shrestha Home Automation Enhancement and Music Player Control with EEG-Based Headset
HK40007325B (zh) Detecting and using body tissue electrical signals
HK40007325A (en) Detecting and using body tissue electrical signals

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24895007

Country of ref document: EP

Kind code of ref document: A1