WO2025199137A1 - Geriatric functional assessment system using passive wearable sensing and deep learning - Google Patents
Geriatric functional assessment system using passive wearable sensing and deep learning
- Publication number
- WO2025199137A1 (PCT/US2025/020422)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- physical function
- machine learning
- motion
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1116—Determining posture transitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/112—Gait analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/20—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2503/00—Evaluating a particular growth phase or type of persons or animals
- A61B2503/08—Elderly
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0219—Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
Definitions
- the techniques described herein relate to a geriatric functional assessment system including: a wearable device capable of collecting visual data, motion data, or a combination thereof; a processor for analyzing the data to assess physical functioning.
- the techniques described herein relate to a geriatric functional assessment system, wherein the processor uses a deep neural network to analyze the data.
- the techniques described herein relate to a method for assessing the physical condition of a geriatric patient, the method including: capturing visual data, motion data, or a combination thereof using a wearable device; using a machine learning program in a processor to analyze the data to assess the physical condition.
- FIGS. 1A-1D are a series of photographs showing a prototype of a device.
- FIG. 2 illustrates a machine learning engine for training and execution related to biometric authentication, according to various examples.
- FIG. 3 illustrates generally an example of a block diagram of a machine upon which any one or more of the techniques discussed herein may perform, according to various examples.
- the present disclosure relates to wearable monitoring devices and machine learning technologies and, in some examples, to algorithms and systems to assess physical function in geriatric patients using passive data collection and deep neural network analysis.
- Physical function declines with age, making it increasingly difficult for older adults to live independently. Primary care providers are well-positioned to screen for early signs of functional decline.
- a Geriatric Functional Assessment (GFA) system is described that includes a badge-like wearable worn during visits and is designed to collect visual and motion data unobtrusively. Deep neural networks will be used to analyze this data and infer physical function. This system can assess physical function, allowing clinicians to implement interventions to promote and maintain healthy aging.
- the described examples provide a geriatric functional assessment system that uses wearable technology and machine learning to monitor physical function in aging populations.
- the system addresses technical challenges in collecting and analyzing movement data during routine clinical visits. Although clinical visits are described herein, it is understood that the system can be used in many other settings, including home settings, hospital settings, retirement home settings, workplace settings, or any other setting to which the system can be brought by a user.
- the wearable device component comprises multiple integrated sensors housed in a badge-sized casing.
- a Portenta Vision Shield board, or the like, contains a 320x320-pixel grayscale camera sensor, or the like, that captures and processes video data in real time.
- the device receives power from a 10,000 mAh battery system.
- the power distribution network connects to the processing boards and sensors through integrated power management circuitry.
- the system may alternatively utilize an internal lithium polymer battery configuration to reduce external components.
- the device can demonstrate integration capabilities with clinical systems through wireless data transmission and secure storage protocols.
- the physical form factor enables unobtrusive data collection during natural movement through clinical environments while maintaining proper sensor orientation for accurate measurements.
- Described technology examples of geriatric functional assessment systems provide technical solutions to several technical problems.
- Traditional physical function assessment requires dedicated testing spaces, structured protocols, and trained personnel, which limits natural movement analysis.
- the system implements a passive wearable device that combines a 320x320-pixel grayscale camera sensor for continuous visual data capture, a 9-DOF IMU sensor with fusion algorithm for stable three-axis acceleration data, parallel motion and video processors for real-time data collection, and a badge-sized form factor enabling unrestricted movement monitoring in clinical spaces.
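The fusion algorithm for the 9-DOF IMU is not specified in detail; as an illustrative sketch under that assumption, a complementary filter is one common way to blend noisy-but-absolute accelerometer readings with smooth-but-drifting gyroscope rates into a stable orientation estimate (the function and parameter names here are hypothetical):

```python
import math

def complementary_filter(accel, gyro, dt, alpha=0.98, pitch=0.0):
    """Fuse accelerometer and gyroscope readings into a stable pitch
    estimate (one axis shown for simplicity).

    accel: list of (ax, ay, az) samples in g; gyro: pitch rates in deg/s;
    dt: sample period in seconds; alpha: blend weight for the gyro path.
    """
    estimates = []
    for (ax, ay, az), rate in zip(accel, gyro):
        # Accelerometer gives an absolute but noisy pitch from gravity.
        accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
        # Gyroscope integrates smoothly but drifts; blend the two paths.
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates
```

A full 9-DOF fusion would also incorporate the magnetometer for heading; this single-axis version only illustrates the blending principle.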
- Analyzing simultaneous visual and motion data streams requires significant computational resources and complex data integration.
- the system addresses this through a dual-engine architecture that employs a training engine for offline model development using preprocessed input data, a real-time prediction engine operating on the device, feature extraction from multiple data streams including accelerometer, gyroscope, magnetic field, and visual data, and parallel processing capabilities through the Portenta H7 board architecture.
- Health monitoring systems must maintain data privacy while enabling integration with clinical workflows.
- the system can address this through local encryption behind firewalls, secure data transmission to approved institutional servers, local processing rather than cloud-based computing, separation of HIPAA-protected identifiers, and integration capabilities with electronic health records.
- Healthcare monitoring systems must support various hardware configurations and implementation scenarios.
- the system architecture enables operation across multiple hardware platforms including PCs, tablets, and mobile devices, support for various network protocols and physical interfaces, multiple machine learning algorithm options including supervised, unsupervised, and reinforcement learning, and a modular design supporting sensor and processing alternatives.
- FIG. 2 illustrates a machine learning engine for training and execution related to the GFA device according to various examples.
- the machine learning engine may be deployed to execute at a mobile device (e.g., a cell phone) or a computer (e.g., an orchestrator server).
- a system may calculate one or more weightings for criteria based upon one or more machine learning algorithms.
- FIG. 2 shows an example machine learning engine 400 according to some examples of the present disclosure.
- Machine learning engine 400 uses a training engine 402 and a prediction engine 404.
- Training engine 402 uses input data 406, for example after undergoing preprocessing component 408, to determine one or more features 410.
- the one or more features 410 may be used to generate an initial model 412, which may be updated iteratively or with future labeled or unlabeled data (e.g., during reinforcement learning), for example to improve the performance of the prediction engine 404 or the initial model 412.
- An improved model may be redeployed for use.
- the input data 406 may be health related data as described hereinabove.
- current data 414 (e.g., information received in an authentication attempt, such as via an API, which may include biometric data, etc.) may be input into the prediction engine 404 after preprocessing.
- preprocessing component 416 and preprocessing component 408 are the same.
- the prediction engine 404 produces feature vector 418 from the preprocessed current data, which is input into the model 420 to generate one or more criteria weightings 422.
- the criteria weightings 422 may be used to output a prediction, as discussed further below.
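As a minimal sketch of the prediction path described above (assuming a linear model, which the disclosure permits but does not mandate), the feature vector 418 can be scored against the coefficients of model 420 to produce criteria weightings 422 and a raw prediction score; the softmax normalization is an illustrative choice:

```python
import math

def predict(weights, features):
    """Score a feature vector (418) against learned coefficients (420).

    Returns normalized criteria weightings (422) reflecting the relative
    contribution of each feature, plus the raw linear score.
    """
    contributions = [w * x for w, x in zip(weights, features)]
    exps = [math.exp(c) for c in contributions]
    criteria_weightings = [e / sum(exps) for e in exps]  # sum to 1.0
    return criteria_weightings, sum(contributions)
```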
- the training engine 402 may operate in an offline manner to train the model 420 (e.g., on a server).
- the prediction engine 404 may be designed to operate in an online manner (e.g., in real-time, at a mobile device, on a wearable device, etc.).
- the model 420 may be periodically updated via additional training (e.g., via updated input data 406 or based on labeled or unlabeled data output in the weightings 422) or based on identified future data, such as by using reinforcement learning to personalize a general model (e.g., the initial model 412) to a particular user.
- Labels for the input data 406 may be any suitable clinical term.
- the initial model 412 may be updated using further input data 406 until a satisfactory model 420 is generated.
- the model 420 generation may be stopped according to a specified criteria (e.g., after sufficient input data is used, such as 1,000, 10,000, 100,000 data points, etc.) or when data converges (e.g., similar inputs produce similar outputs).
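The convergence-based stopping rule can be sketched as follows; the tolerance value, the data budget, and the loop structure are illustrative assumptions:

```python
def train_until_converged(update_step, init_loss, tol=1e-4, max_points=100_000):
    """Halt training when the per-iteration loss improvement drops below
    `tol` (convergence) or when the data budget `max_points` is exhausted.

    update_step: callable mapping the current loss to the next loss.
    Returns (final_loss, iterations_used).
    """
    loss = init_loss
    for step in range(max_points):
        new_loss = update_step(loss)
        if abs(loss - new_loss) < tol:  # similar inputs -> similar outputs
            return new_loss, step + 1
        loss = new_loss
    return loss, max_points
```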
- the specific machine learning algorithm used for the training engine 402 may be selected from among many different potential supervised or unsupervised machine learning algorithms.
- Examples of supervised learning algorithms include artificial neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3 (ID3), C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, and hidden Markov models.
- Examples of unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method. Unsupervised models may not have a training engine 402.
- a regression model is used and the model 420 is a vector of coefficients corresponding to a learned importance for each of the features in the vector of features 410, 418.
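A minimal illustration of such a regression model, fit by gradient descent so that the returned coefficient vector plays the role of model 420 (one learned importance per feature); the data and hyperparameters are made up for the example:

```python
def fit_linear(X, y, lr=0.05, epochs=2000):
    """Gradient-descent linear regression over a list-of-lists dataset.

    Returns a coefficient vector w such that prediction = dot(w, x);
    each coefficient encodes a learned importance for one feature.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += err * xi[j]
        # Average-gradient step on the squared-error loss.
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w
```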
- a reinforcement learning model may use Q-Learning, a deep Q network, a Monte Carlo technique including policy evaluation and policy improvement, State-Action-Reward-State-Action (SARSA), a Deep Deterministic Policy Gradient (DDPG), or the like.
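Of the listed options, tabular Q-Learning has the most compact update rule; a sketch over a fixed list of transitions (the state and action spaces, learning rate, and discount factor are hypothetical):

```python
def q_learning(transitions, n_states, n_actions, alpha=0.5, gamma=0.9):
    """Apply the standard tabular Q-Learning update
        Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    over (state, action, reward, next_state) transitions."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for s, a, r, s2 in transitions:
        target = r + gamma * max(Q[s2])      # bootstrapped return estimate
        Q[s][a] += alpha * (target - Q[s][a])
    return Q
```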
- the model 420 may output a prediction, analysis, and/or recommendation related to a patient's health.
- FIG. 3 illustrates generally an example of a block diagram of a machine 600 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform in accordance with some examples.
- the machine 600 may operate as a standalone device or may be connected (e.g., networked) to other machines.
- the machine 600 may operate in the capacity of a server machine, a client machine, or both in server-client network environments.
- the machine 600 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment.
- the machine 600 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
- Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms.
- Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating.
- a module includes hardware.
- the hardware may be specifically configured to carry out a specific operation (e.g., hardwired).
- the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions, where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer readable medium when the device is operating.
- the execution units may be a member of more than one module.
- the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module.
- Machine 600 may include a hardware processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 604 and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608.
- the machine 600 may further include a display unit 610, an alphanumeric input device 612 (e.g., a keyboard), and a user interface (UI) navigation device 614 (e.g., a mouse).
- the display unit 610, alphanumeric input device 612 and UI navigation device 614 may be a touch screen display.
- the machine 600 may additionally include a storage device (e.g., drive unit) 616, a signal generation device 618 (e.g., a speaker), a network interface device 620, and one or more sensors 621, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor.
- the machine 600 may include an output controller 628, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
- the storage device 616 may include a machine readable medium 622 that is non-transitory, on which is stored one or more sets of data structures or instructions 624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
- the instructions 624 may also reside, completely or at least partially, within the main memory 604, within static memory 606, or within the hardware processor 602 during execution thereof by the machine 600.
- one or any combination of the hardware processor 602, the main memory 604, the static memory 606, or the storage device 616 may constitute machine readable media.
- While the machine readable medium 622 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 624.
- machine readable medium may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600 and that cause the machine 600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions.
- Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media.
- machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the instructions 624 may further be transmitted or received over a communications network 626 using a transmission medium via the network interface device 620 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.).
- Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others.
- the network interface device 620 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 626.
- the network interface device 620 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
- the term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
- Primary care providers (PCPs) may notice subtle or incremental changes in a patient's physical health, yet 66% underestimate physical function on inquiry. Objective monitoring of changes can accurately identify a need to intervene and plan for future health needs. This lowers costs, improves clinical efficiency, and facilitates patient-centered decisions. Measuring physical function provides patients and clinicians useful data to implement treatment plans.
- Remote patient monitoring (RPM) can automate data collection and may: (a) support patient engagement by improving a clinician's ability to track patients' status and respond to change; (b) furnish clinicians with ongoing data to alter treatment; and (c) offer patients data for motivation. Integrating RPM into routine care can overcome the need for staff to conduct measures.
- Context for Assessing Function: Mobility is a complex task affected by footwear, types of activity, and gait patterns. Evaluating movement, balance, or strength in controlled settings using walkways or force-plated instrumented treadmills often follows structured protocols. Testing requires time, patient and staff effort, and space if performed in clinics. Inferring function from spontaneous movements (moving without restrictions) can overcome these limitations by integrating contextual data (e.g., walking slowly in crowds). Deep neural networks can include time-aligned motion and visual data using multi-modal and transfer learning to implement effective learning. Clinics are optimal sites to collect contextual data for testing, as home settings have different footprints, pose logistical barriers, require installation and upkeep, and face privacy concerns. Clinic wearables capturing spontaneous movement can infer function using deep neural networks while maximizing usability. However, as previously stated, the system can be used to collect contextual data in any setting outside the standard clinical setting described herein.
- Prototype Functionality & Assembly: The GFA will continuously monitor motion and video data as users freely move about in common areas of the clinic (e.g., check-in, hallways, waiting room) and interact with staff or surrounding objects. Data will be securely uploaded to a server. Continuous, passive data collection will not affect clinic workflows, nor will it rely on location- or structure-dependent ambient sensors (e.g., surveillance cameras).
- Model Design: Handcrafted motion features are considered in light of factors such as a histogram of gradients, statistical features, and wavelet energy, as well as automated motion features from recurrent neural networks.
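A sketch of how such handcrafted statistical and wavelet-energy features might be computed from a single accelerometer axis; the exact feature set is a design choice and is not specified by the disclosure:

```python
import math
import statistics

def motion_features(signal):
    """Handcrafted features from one accelerometer axis: simple statistics
    plus first-level Haar wavelet energy as a rough stand-in for the
    wavelet energies mentioned in the text.
    """
    mean = statistics.fmean(signal)
    std = statistics.pstdev(signal)
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    # First-level Haar detail coefficients capture high-frequency motion.
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    wavelet_energy = sum(d * d for d in detail)
    return {"mean": mean, "std": std, "rms": rms,
            "wavelet_energy": wavelet_energy}
```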
- Handcrafted visual features such as spatio-temporal interest points and feature points between frames, and automated visual features using models pre-trained on first-person video datasets such as EPIC-Kitchens and the Stanford-ECM Dataset, will be adopted.
- After feature extraction, both motion and visual features will be input into an attention module to output attention weights that reflect their relevance to physical function. These weights will be input into an attention pooling component to generate an embedding.
- Three fully connected layers will convert the embedding to walking speed, sit-to-stand repetitions, and postural control.
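The attention pooling step above can be sketched as follows, with a trivial stand-in scoring function in place of the learned attention module and the three fully connected output layers omitted:

```python
import math

def attention_pool(features):
    """Attention pooling sketch: score each time-step feature vector,
    softmax the scores into attention weights, and return the weighted
    average as the embedding. The plain-sum scoring function is a
    placeholder for a learned attention module.
    """
    scores = [sum(f) for f in features]           # stand-in relevance scores
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]       # attention weights, sum to 1
    dim = len(features[0])
    embedding = [sum(w * f[j] for w, f in zip(weights, features))
                 for j in range(dim)]
    return embedding, weights
```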
- a weakly supervised learning technique will be used to locate activities in time, using untrimmed time-series motion and visual features; function values serve as the labels of the per-visit data. Either single- or multi-task learning will be used by separating or combining the loss functions.
- Models will be initialized using manual attention labels, including sitting, standing, and moving, annotated by the RA based on surveillance videos. Training with manual labels is a fully supervised learning strategy that will not be used in the product but is critical for initializing and evaluating models.
- Accuracy Analysis The accuracy of the GFA in inferring physical function will be evaluated using root-mean-square deviation.
- The inference accuracy (with objective physical function measures as ground truth) will be evaluated using precision, recall, area under the receiver operating characteristic and precision-recall curves, and F1-score. If accurate, a visualization technique will be developed to show segments of data and the visit. Visualization will allow trend analysis using graphs and refinement of the model, and, once finalized, will provide clinically useful data. If the model’s outcome is inaccurate, false positives and negatives will be analyzed to investigate the possible causes of failure and to explore additional steps to improve the utility of the data and the accuracy of the model.
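For illustration, the root-mean-square deviation and precision/recall/F1 metrics named above can be sketched as follows (a simplified, binary-label example; the actual evaluation pipeline is not specified in the disclosure):

```python
import math

def rmsd(predicted, actual):
    """Root-mean-square deviation between inferred and measured function."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def precision_recall_f1(predicted, actual, positive=1):
    """Precision, recall, and F1-score for one positive class."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == positive and a == positive)
    fp = sum(1 for p, a in zip(predicted, actual) if p == positive and a != positive)
    fn = sum(1 for p, a in zip(predicted, actual) if p != positive and a == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy continuous targets (e.g., walking speed in m/s) and binary labels.
error = rmsd([1.1, 0.9, 0.7], [1.0, 1.0, 0.8])
p, r, f1 = precision_recall_f1([1, 0, 1, 1], [1, 0, 0, 1])
```

Weighted (per-class) averaging, as reported in the Examples, would repeat the class-wise computation and weight by class support.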
- GFA will allow clinicians to identify patients who would benefit from healthy aging interventions to enhance their independence.
- The GFA system addresses a service gap of using technology to overcome barriers faced in routine care... Healthy aging represents a growing clinical challenge to achieve, and the systems described herein can help to achieve healthy aging.
- Alzheimer’s disease and related dementias; heart failure; arthritis (osteoarthritis or rheumatoid); chronic hepatitis; asthma; HIV/AIDS; atrial fibrillation; hyperlipidemia; autism spectrum disorder; hypertension; cancer; ischemic heart disease; chronic kidney disease; osteoporosis; chronic obstructive pulmonary disease; schizophrenia or other psychotic disorders; depression; stroke; or diabetes).
- Study Procedures Study visits occurred at a multispecialty clinic building. All participants received a brief, interactive session on how to use the device. The device was then placed on participants (see details below) and the device was turned “on” while the participant walked around the clinic environment for 15 minutes. At its conclusion, physical function assessments were performed along with an exit interview.
- Frailty Status Each individual’s phenotypic frailty status was evaluated using Fried’s criteria based on a mixture of subjective and objective measures, aligning with our previous methods (3). Criteria included: (a) gait speed: <0.8 m/s; (b) low activity: based on physical activity compared to peers of their age; (c) exhaustion: whether participants felt tired during the day; (d) weakness: whether participants felt weak over the past month; and (e) weight loss of >10% in the past 6 months; as not all participants had a weight recorded at 6 months, the weight closest to the 6-month date was used. Participants were categorized as frail if they fulfilled three or more criteria, prefrail with one or two criteria, and robust if no criteria were fulfilled.
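The categorization step above can be sketched as follows, assuming "three criteria" means at least three of the five; the field names are illustrative, not the study's data schema:

```python
def frailty_status(criteria):
    """criteria: dict mapping each Fried criterion -> bool (fulfilled or not)."""
    met = sum(criteria.values())
    if met >= 3:
        return "frail"
    if met >= 1:
        return "prefrail"
    return "robust"

# Hypothetical participant with three criteria fulfilled.
participant = {"slow_gait": True,      # gait speed < 0.8 m/s
               "low_activity": False,
               "exhaustion": True,
               "weakness": True,
               "weight_loss": False}   # >10% loss in past 6 months
status = frailty_status(participant)
```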
- SVCs Support vector classifiers
- RFs random forests
- DNNs deep neural networks
- the models were trained using features extracted from accelerometer, gyroscope, Euler angle, and magnetic field data, incorporating statistical measures like mean, variance, skewness, and kurtosis across the XYZ axes.
- Data processing involved computing mean values within predefined windows to capture relevant movement patterns, classifying physical function into three categories: improved, stable, or declined. Improvement was characterized by increased gait speed, more sit-to-stand repetitions, and reduced travel distances, while a decline reflected slower gait, fewer transitions, and increased travel distances.
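The windowed-mean preprocessing and three-way change labeling described above can be sketched as follows; the window size and the sign-based decision rule are illustrative assumptions, not the disclosed implementation:

```python
def window_means(samples, window=4):
    """Mean of each fixed-size, non-overlapping window over one sensor channel."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def classify_change(delta_gait_speed, delta_sit_to_stand, delta_travel):
    """Label a per-visit change as improved, stable, or declined."""
    if delta_gait_speed > 0 and delta_sit_to_stand > 0 and delta_travel < 0:
        return "improved"   # faster gait, more transitions, shorter travel
    if delta_gait_speed < 0 and delta_sit_to_stand < 0 and delta_travel > 0:
        return "declined"   # slower gait, fewer transitions, longer travel
    return "stable"

means = window_means([1.0, 1.2, 0.8, 1.0, 0.9, 1.1, 1.3, 0.7], window=4)
label = classify_change(delta_gait_speed=0.1, delta_sit_to_stand=2,
                        delta_travel=-5.0)
```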
- For gait speed classification, RF performed the best with an accuracy of 0.775, weighted precision of 0.842, weighted recall of 0.775, and weighted F1-score of 0.794.
- A deep neural network (DNN) analysis followed with an accuracy of 0.713, weighted precision of 0.719, weighted recall of 0.713, and weighted F1-score of 0.698.
- SVC had the lowest performance with an accuracy of 0.677, weighted precision of 0.634, weighted recall of 0.677, and weighted F1-score of 0.654.
- For sit-to-stand classification, RF had the lowest accuracy at 0.565, with a weighted precision of 0.708, weighted recall of 0.565, and a weighted F1-score of 0.605.
- The DNN outperformed both models with an accuracy of 0.661, weighted precision of 0.611, weighted recall of 0.661, and weighted F1-score of 0.633.
- SVC had intermediate performance, achieving an accuracy of 0.654, weighted precision of 0.594, weighted recall of 0.654, and weighted F1-score of 0.621.
- For balance assessment, SVC had the highest accuracy at 0.663, with a weighted precision of 0.683, weighted recall of 0.663, and a weighted F1-score of 0.648.
- The DNN closely followed with an accuracy of 0.646, weighted precision of 0.666, weighted recall of 0.646, and weighted F1-score of 0.636.
- RF had the lowest accuracy at 0.585, with a weighted precision of 0.701, weighted recall of 0.584, and a weighted F1-score of 0.624.
- The usability study demonstrates the feasibility of deploying a geriatric functional assessment system to collect objective physical function data while concurrently generating a positive user experience among an older adult population within an ambulatory clinic setting.
- a geriatric functional assessment system comprises: a wearable device comprising: a camera sensor; a motion sensor; a processor; and a memory storing instructions that, when executed by the processor, cause the system to: collect visual data from the camera sensor; collect motion data from the motion sensor; and analyze the collected data using a machine learning model to assess physical functioning.
- Aspect 2 The system of aspect 1, wherein the machine learning model comprises a deep neural network.
- Aspect 3 The system of aspect 1, wherein the motion sensor comprises an inertial measurement unit providing accelerometer, gyroscope, and magnetic field data.
- Aspect 4 The system of aspect 1, wherein analyzing the collected data comprises: extracting features from the visual and motion data; inputting the extracted features into the machine learning model; and generating a physical function assessment based on the machine learning model output.
- Aspect 6 The system of aspect 1, further comprising: a secure data transmission module configured to encrypt and transmit the collected data to a protected server.
- Aspect 7 The system of aspect 1, wherein the machine learning model is trained using labeled physical function data from multiple subjects.
- Aspect 8 The system of aspect 1, wherein the system is configured to: track changes in physical function over time; and generate alerts when physical function changes exceed predetermined thresholds.
- Aspect 9 The system of aspect 1, wherein the wearable device is configured to be worn as a badge during clinical visits.
- a method for assessing physical function comprising: collecting visual data and motion data using a wearable device; processing the collected data to extract features; analyzing the extracted features using a machine learning model; and generating a physical function assessment based on the analysis.
- Aspect 11 The method of aspect 10, wherein the machine learning model comprises at least one of: a support vector classifier; a random forest classifier; and a deep neural network.
- Aspect 12 The method of aspect 10, further comprising: encrypting the collected data; transmitting the encrypted data to a secure server; and storing the encrypted data in compliance with privacy regulations.
- Aspect 14 The method of aspect 10, further comprising: integrating the physical function assessment with an electronic health record system.
- a system for monitoring physical function comprising: a wearable device configured to collect motion and visual data; and a processor configured to: analyze the collected data using machine learning; generate physical function metrics; and track changes in the metrics over time.
- Aspect 19 The system of aspect 18, wherein the processor is further configured to: encrypt the collected data; transmit the encrypted data to a secure server; and integrate the physical function metrics with electronic health records.
- the machine learning analysis comprises: feature extraction from motion and visual data; classification using multiple machine learning models; and generation of physical function assessments based on model outputs.
Abstract
A system may include a wearable device capable of collecting visual data, motion data, or a combination thereof. A system may include a processor for analyzing the data to assess physical functioning.
Description
GERIATRIC FUNCTIONAL ASSESSMENT SYSTEM USING PASSIVE WEARABLE
SENSING AND DEEP LEARNING
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of priority to U.S. Provisional Patent Application Serial No. 63/566,568 entitled “GERIATRIC FUNCTIONAL ASSESSMENT SYSTEM USING PASSIVE WEARABLE SENSING AND DEEP LEARNING,” filed March 18, 2024, the disclosure of which is incorporated herein in its entirety by reference.
BACKGROUND
[0002] Aging presents various risks to an individual’s health. Effective monitoring of physical changes during aging can help health care providers to determine treatment plans for aging individuals.
STATEMENT OF GOVERNMENT SUPPORT
[0003] This invention was made with government support under AG073104 awarded by the National Institutes of Health/National Institute on Aging. The government has certain rights in the invention.
SUMMARY OF THE DISCLOSURE
[0004] In some aspects, the techniques described herein relate to a geriatric functional assessment system including: a wearable device capable of collecting visual data, motion data, or a combination thereof; a processor for analyzing the data to assess physical functioning.
[0005] In some aspects, the techniques described herein relate to a geriatric functional assessment system, wherein the processor uses a deep neural network to analyze the data.
[0006] In some aspects, the techniques described herein relate to a method for assessing the physical condition of a geriatric patient, the method including: capturing visual data, motion data, or a combination thereof using a wearable device; using a machine learning program in a processor to analyze the data to assess the physical condition.
[0007] In some aspects, the techniques described herein relate to a method, wherein the machine learning program is a neural network.
BRIEF DESCRIPTION OF THE FIGURES
[0008] The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0009] FIGS. 1A-1D are a series of photographs showing a prototype of a device.
[0010] FIG. 2 illustrates a machine learning engine for training and execution related to the GFA device, according to various examples.
[0011] FIG. 3 illustrates generally an example of a block diagram of a machine upon which any one or more of the techniques discussed herein may perform, according to various examples.
DETAILED DESCRIPTION
[0012] Reference will now be made in detail to certain embodiments of the disclosed subject matter, examples of which are illustrated in part in the accompanying drawings. While the disclosed subject matter will be described in conjunction with the enumerated claims, it will be understood that the exemplified subject matter is not intended to limit the claims to the disclosed subject matter.
[0013] Throughout this document, values expressed in a range format should be interpreted in a flexible manner to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range is explicitly recited. For example, a range of “about 0.1% to about 5%” or “about 0.1% to 5%” should be interpreted to include not just about 0.1% to about 5%, but also the individual values (e.g., 1%, 2%, 3%, and 4%) and the sub-ranges (e.g., 0.1% to 0.5%, 1.1% to 2.2%, 3.3% to 4.4%) within the indicated range. The statement “about X to Y” has the same meaning as “about X to about Y,” unless indicated otherwise. Likewise, the statement “about X, Y, or about Z” has the same meaning as “about X, about Y, or about Z,” unless indicated otherwise.
[0014] In this document, the terms “a,” “an,” or “the” are used to include one or more than one unless the context clearly dictates otherwise. The term “or” is used to refer to a nonexclusive “or” unless otherwise indicated. The statement “at least one of A and B” has the same meaning as “A, B, or A and B.” In addition, it is to be understood that the phraseology or terminology employed herein, and not otherwise defined, is for the purpose of description only
and not of limitation. Any use of section headings is intended to aid reading of the document and is not to be interpreted as limiting; information that is relevant to a section heading may occur within or outside of that particular section.
[0015] The present disclosure relates to wearable monitoring devices and machine learning technologies and, in some examples, to algorithms and systems to assess physical function in geriatric patients using passive data collection and deep neural network analysis.
[0016] Physical function declines with age, making it increasingly difficult for older adults to live independently. Primary care providers are well-positioned to screen for early signs of functional decline. A Geriatric Functional Assessment (GFA) system comprising a badge-like wearable worn during visits is described; it is designed to collect visual and motion data unobtrusively. Deep neural networks will be used to analyze this data and infer physical function. This system can assess physical function, allowing clinicians to implement interventions to promote and maintain healthy aging.
[0017] Physical function assessment in aging populations presents complex technical challenges. Monitoring and analyzing human movement requires capturing multiple data streams including motion, balance, and visual information. Traditional assessment methods rely on controlled environments and structured protocols that limit natural movement patterns. Machine learning and computer vision technologies have advanced capabilities for processing complex biometric data, but face difficulties in accurately interpreting spontaneous human activities. Wearable monitoring devices generate continuous data streams that require sophisticated processing to extract meaningful health insights. The integration of sensor data with clinical workflows introduces additional technical complexities around data security, transmission protocols, and electronic health record systems.
[0018] The described examples provide a geriatric functional assessment system that uses wearable technology and machine learning to monitor physical function in aging populations. The system addresses technical challenges in collecting and analyzing movement data during routine clinical visits. Although clinical visits are described herein, it is understood that the system can be used in many other settings including home settings, hospital settings, retirement home settings, workplace settings, or any other setting to which the system can be brought by a user.
[0019] The wearable device component comprises multiple integrated sensors housed in a badge-sized casing. For example, a Portenta Vision Shield board, or the like, contains a 320x320-pixel grayscale camera sensor, or the like, that captures and processes video data in real-time. A Portenta H7 board, or the like, can provide parallel motion and video processors for executing machine learning tasks. An Adafruit 9-DOF Absolute Orientation Inertial Measurement Unit sensor, or the like (such as the Arduino IMU shield or MKR Zero), [0020] generates stable three-axis acceleration data through a fusion algorithm.
[0021] The system collects data streams as users move naturally through clinical environments like check-in areas, hallways, and waiting rooms. The motion sensors record accelerometer data, gyroscope data, magnetic field measurements, and Euler angles. The camera module simultaneously captures visual information about movement patterns and interactions. [0022] Data processing occurs through a dual-engine architecture. A training engine operates offline on server infrastructure to develop the initial machine learning models. A prediction engine runs in real-time on the wearable device to generate physical function assessments. The system implements multiple machine learning approaches including support vector classifiers for handling high-dimensional data, random forests for modeling non-linear relationships, and deep neural networks for learning complex movement patterns.
[0023] The analysis pipeline extracts features from both motion and visual data streams. Statistical measures like mean, variance, skewness, and kurtosis are calculated across spatial axes. The system can evaluate multiple aspects of physical function including walking speed, sit-to-stand transitions, balance, and postural control. Machine learning models process these features to classify function into categories of improved, stable, or declined status.
[0024] Security measures can be included to protect sensitive health information through local encryption, secure server transmission, and separation of identifiers. The system architecture supports various hardware configurations including personal computers, tablets, mobile devices, and networked systems. Components include processors, memory, storage, network interfaces, sensors, displays, and input devices.
[0025] The system can implement specific protocols for integration with electronic health records (EHR) and clinical workflows. The wearable device collects continuous data as users move through clinical spaces including check-in areas, hallways, and waiting rooms. This data can be processed and formatted for integration with existing clinical systems. The network
interface device enables connectivity through multiple protocols including frame relay, internet protocol, transmission control protocol, and hypertext transfer protocol. These standardized protocols facilitate data exchange between the assessment system and EHR platforms. The system can support various physical connection types including Ethernet, coaxial, and wireless interfaces to accommodate different clinical network infrastructures.
[0026] For clinical workflow integration, the system can generate objective physical function measurements that can be documented within the EHR. The processed data provides benchmarks and categories of change that align with clinical assessment needs. The system architecture supports integration of these metrics with electronic health records while maintaining HIPAA compliance through secure data transmission and storage protocols. The machine learning analysis pipeline processes collected data into standardized formats for clinical documentation. The system generates physical function assessments including gait speed measurements, sit-to-stand transition counts, balance metrics, and postural control evaluations. These quantitative measurements are formatted for incorporation into patient care plans and clinical documentation systems.
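As a purely illustrative sketch of formatting a processed assessment for clinical documentation (the field names below are hypothetical and do not represent a standardized clinical or EHR schema):

```python
import json

def format_assessment(patient_ref, metrics):
    """Format one visit's physical function metrics as a JSON record."""
    record = {"resource": "FunctionalAssessment",  # illustrative label
              "subject": patient_ref,
              "gait_speed_m_per_s": round(metrics["gait_speed"], 2),
              "sit_to_stand_count": metrics["sit_to_stand"],
              "balance_score": round(metrics["balance"], 2),
              "postural_control": metrics["postural_control"]}
    return json.dumps(record)

payload = format_assessment("patient-123",
                            {"gait_speed": 0.873, "sit_to_stand": 8,
                             "balance": 0.912, "postural_control": "stable"})
```

A production integration would instead map these fields onto the receiving EHR's documented interface.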
[0027] The integration architecture can allow for longitudinal tracking of patient outcomes through the EHR. The system supports documentation of physical function changes over time, allowing clinicians to implement interventions and monitor progress. The data integration capabilities may facilitate Medicare or third-party payor reimbursement by providing objective documentation of functional assessments and changes. For clinical implementation, the system includes interfaces for device activation and status monitoring by clinical staff. The integration protocols support multi-site data aggregation while maintaining security compliance. The system architecture enables creation of data pipelines for transmitting processed assessments to EHR or cloud-based clinical dashboards.
[0028] Alternative implementations can utilize different sensor configurations, form factors, and processing architectures. The machine learning system supports multiple algorithmic approaches including supervised learning, unsupervised learning, and reinforcement learning techniques. Specific variations may incorporate support vector machines, decision trees, linear classifiers, regression models, or hidden Markov models.
[0029] The system can integrate with clinical workflows through continuous monitoring during visits and connection to electronic health records. Performance evaluation uses technical
metrics including root-mean-square deviation, precision, recall, receiver operating characteristic analysis, and F1-scores. Data storage and transmission capabilities encompass local storage, network protocols, and multiple connection types.
[0030] As shown in the Examples the systems demonstrate favorable technical capabilities. In exemplary implementations, random forests achieved 77.5% accuracy in gait speed prediction with 0.842 weighted precision. Neural networks showed 66.1% accuracy for sit- to-stand classification. Support vector classifiers demonstrated 66.3% accuracy in balance assessment.
[0031] The described technology enables automated collection of objective physical function data during routine clinical visits. The passive monitoring approach eliminates requirements for dedicated testing spaces or personnel. Machine learning analysis provides quantitative assessment of function changes over time. The system architecture supports integration with existing clinical systems while maintaining security compliance.
[0032] FIG. 1 is a series of photographs illustrating a prototype device for geriatric functional assessment, according to some examples.
[0033] The prototype device comprises a badge-sized wearable unit containing multiple integrated sensor components. The housing is constructed using 3D printing techniques and underwent 23 iterations to optimize functionality and usability. The encasing incorporates strategic component placement to achieve proper weight distribution and minimize user discomfort when worn around the neck.
[0034] The device integrates a Portenta Vision Shield board containing a 320x320-pixel grayscale camera sensor, a Portenta H7 board with parallel motion and video processors, and an Adafruit 9-DOF Absolute Orientation IMU sensor. The components interface through an internal connection architecture that enables simultaneous data collection and processing.
[0035] The physical design includes dedicated slots for battery connections, micro-SD card access, and LED status indicators. A hook mechanism at the top of the encasing provides attachment points for the neckband. Curved side features facilitate maintenance access to internal components while maintaining structural integrity.
[0036] The device receives power from a 10,000 mAh battery system. The power distribution network connects to the processing boards and sensors through integrated power
management circuitry. The system may alternatively utilize an internal lithium polymer battery configuration to reduce external components.
[0037] The device can demonstrate integration capabilities with clinical systems through wireless data transmission and secure storage protocols. The physical form factor enables unobtrusive data collection during natural movement through clinical environments while maintaining proper sensor orientation for accurate measurements.
[0038] Described technology examples of geriatric functional assessment systems provide technical solutions to several technical problems. Traditional physical function assessment requires dedicated testing spaces, structured protocols, and trained personnel which limits natural movement analysis. To address this, the system implements a passive wearable device that combines a 320x320-pixel grayscale camera sensor for continuous visual data capture, a 9-DOF IMU sensor with fusion algorithm for stable three-axis acceleration data, parallel motion and video processors for real-time data collection, and a badge-sized form factor enabling unrestricted movement monitoring in clinical spaces.
[0039] Analyzing simultaneous visual and motion data streams requires significant computational resources and complex data integration. The system addresses this through a dual-engine architecture that employs a training engine for offline model development using preprocessed input data, a real-time prediction engine operating on the device, feature extraction from multiple data streams including accelerometer, gyroscope, magnetic field, and visual data, and parallel processing capabilities through the Portenta H7 board architecture.
[0040] Converting raw sensor data into meaningful assessments of physical function requires sophisticated analysis techniques. The system implements multiple machine learning approaches including support vector classifiers for high-dimensional data analysis achieving 66.3% accuracy in balance assessment, random forests for non-linear relationship modeling with 77.5% accuracy in gait prediction, deep neural networks for complex pattern recognition with 66.1% accuracy in sit-to-stand classification, and feature extraction incorporating statistical measures across spatial axes.
[0041] Health monitoring systems must maintain data privacy while enabling integration with clinical workflows. The system can address this through local encryption behind firewalls, secure data transmission to approved institutional servers, local processing rather than cloud-
based computing, separation of HIPAA-protected identifiers, and integration capabilities with electronic health records.
[0042] Healthcare monitoring systems must support various hardware configurations and implementation scenarios. The system architecture enables operation across multiple hardware platforms including PCs, tablets, and mobile devices, support for various network protocols and physical interfaces, multiple machine learning algorithm options including supervised, unsupervised, and reinforcement learning, and a modular design supporting sensor and processing alternatives.
[0043] FIG. 2 illustrates a machine learning engine for training and execution related to the GFA device according to various examples. The machine learning engine may be deployed to execute at a mobile device (e.g., a cell phone) or a computer (e.g., an orchestrator server). A system may calculate one or more weightings for criteria based upon one or more machine learning algorithms. FIG. 2 shows an example machine learning engine 400 according to some examples of the present disclosure.
[0044] Machine learning engine 400 uses a training engine 402 and a prediction engine 404. Training engine 402 uses input data 406, for example after undergoing preprocessing component 408, to determine one or more features 410. The one or more features 410 may be used to generate an initial model 412, which may be updated iteratively or with future labeled or unlabeled data (e.g., during reinforcement learning), for example to improve the performance of the prediction engine 404 or the initial model 412. An improved model may be redeployed for use.
[0045] The input data 406 may be health related data as described hereinabove. In the prediction engine 404, current data 414 (e.g., information received in an authentication attempt, such as via an API, which may include biometric data, etc.) may be input to preprocessing component 416. In some examples, preprocessing component 416 and preprocessing component 408 are the same. The prediction engine 404 produces feature vector 418 from the preprocessed current data, which is input into the model 420 to generate one or more criteria weightings 422. The criteria weightings 422 may be used to output a prediction, as discussed further below.
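The training-engine/prediction-engine split of FIG. 2 can be illustrated schematically; the nearest-centroid "model" below is a deliberately trivial stand-in for the learned model 420, and all names are illustrative:

```python
def preprocess(raw):
    """Preprocessing components 408/416: magnitude of each sample (toy step)."""
    return [abs(x) for x in raw]

def extract_feature(raw):
    """Feature 410/418: mean magnitude of the preprocessed signal."""
    sig = preprocess(raw)
    return sum(sig) / len(sig)

def train(labeled_data):
    """Training engine 402: derive one centroid feature per label (offline)."""
    model = {}
    for label, samples in labeled_data.items():
        feats = [extract_feature(s) for s in samples]
        model[label] = sum(feats) / len(feats)
    return model

def predict(model, raw):
    """Prediction engine 404: score current data 414 against model 420."""
    feat = extract_feature(raw)
    return min(model, key=lambda label: abs(model[label] - feat))

# Toy "motion" samples: high-magnitude = moving, low-magnitude = sitting.
model = train({"moving": [[0.9, 1.0, 0.8]], "sitting": [[0.1, 0.05, 0.1]]})
state = predict(model, [0.8, 1.0, 0.9])
```

The real system would replace the centroid with a trained deep network and run the prediction path on the wearable in real time.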
[0046] The training engine 402 may operate in an offline manner to train the model 420 (e.g., on a server). The prediction engine 404 may be designed to operate in an online manner (e.g., in real-time, at a mobile device, on a wearable device, etc.). In some examples, the model
420 may be periodically updated via additional training (e.g., via updated input data 406 or based on labeled or unlabeled data output in the weightings 422) or based on identified future data, such as by using reinforcement learning to personalize a general model (e.g., the initial model 412) to a particular user.
[0047] Labels for the input data 406 may be any suitable clinical term. The initial model 412 may be updated using further input data 406 until a satisfactory model 420 is generated. The model 420 generation may be stopped according to a specified criterion (e.g., after sufficient input data is used, such as 1,000, 10,000, 100,000 data points, etc.) or when data converges (e.g., similar inputs produce similar outputs).
[0048] The specific machine learning algorithm used for the training engine 402 may be selected from among many different potential supervised or unsupervised machine learning algorithms. Examples of supervised learning algorithms include artificial neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, and hidden Markov models. Examples of unsupervised learning algorithms include expectation-maximization algorithms, vector quantization, and the information bottleneck method. Unsupervised models may not have a training engine 402. In an example embodiment, a regression model is used and the model 420 is a vector of coefficients corresponding to a learned importance for each of the features in the vector of features 410, 418. A reinforcement learning model may use Q-Learning, a deep Q network, a Monte Carlo technique including policy evaluation and policy improvement, State-Action-Reward-State-Action (SARSA), a Deep Deterministic Policy Gradient (DDPG), or the like.
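As a worked illustration of the "model 420 as a vector of coefficients" embodiment: ordinary least squares for a single feature plus intercept, solved in closed form (a toy example, not the disclosed training procedure):

```python
def fit_ols(xs, ys):
    """Closed-form OLS for one feature: returns [intercept, coefficient]."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return [my - slope * mx, slope]

# Toy data generated by y = 1 + 2x; the fitted vector recovers [1, 2].
coeffs = fit_ols([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

With multiple features the same idea generalizes to a coefficient per feature, each coefficient expressing that feature's learned importance.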
[0049] Once trained, the model 420 may output a prediction, analysis, and/or recommendation related to a patient’s health.
[0050] FIG. 3 illustrates generally an example of a block diagram of a machine 600 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform in accordance with some examples. In alternative embodiments, the machine 600 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 600 may operate in the capacity of a server machine, a client machine,
or both in server-client network environments. In an example, the machine 600 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 600 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
[0051] Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware may be specifically configured to carry out a specific operation (e.g., hardwired). In an example, the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions, where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer readable medium when the device is operating. In this example, the execution units may be a member of more than one module. For example, under operation, the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module.
[0052] Machine (e.g., computer system) 600 may include a hardware processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 604 and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608. The machine 600 may further include a display unit 610, an alphanumeric input device 612 (e.g., a keyboard), and a user interface (UI) navigation device 614 (e.g., a mouse). In an example, the display unit 610, alphanumeric input device 612 and UI navigation device 614 may be a touch screen display. The machine 600 may additionally include a storage device (e.g., drive unit) 616, a signal generation device 618 (e.g., a speaker), a network interface device 620, and one or more sensors 621, such
as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 600 may include an output controller 628, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
[0053] The storage device 616 may include a machine readable medium 622 that is non-transitory on which is stored one or more sets of data structures or instructions 624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 624 may also reside, completely or at least partially, within the main memory 604, within static memory 606, or within the hardware processor 602 during execution thereof by the machine 600. In an example, one or any combination of the hardware processor 602, the main memory 604, the static memory 606, or the storage device 616 may constitute machine readable media.
[0054] While the machine readable medium 622 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 624.
[0055] The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600 and that cause the machine 600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
[0056] The instructions 624 may further be transmitted or received over a communications network 626 using a transmission medium via the network interface device 620 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol
(HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 620 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 626. In an example, the network interface device 620 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 600, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
EXAMPLES
Example 1
[0057] Physical function in primary care. As older adults visit primary care clinics ~3.6 times yearly, primary care providers (PCPs) are ideally positioned to regularly monitor physical function longitudinally. PCPs may notice subtle or incremental changes in a patient's physical health - yet 66% underestimate physical function on inquiry. Objective monitoring of changes can accurately identify a need to intervene and plan for future health needs. This lowers costs, improves clinical efficiency, and facilitates patient-centered decisions. Measuring physical function provides patients and clinicians useful data to implement treatment plans.
[0058] Measuring physical function is challenging in clinical care. PCPs' busy schedules, competing demands, and a lack of reimbursement preclude in-depth assessments of function. Trained personnel and space are needed to assess walking speed and sit-to-stand, while static postural control can only definitively be measured using formal tools, systems, or software. Function is documented in free-text as part of Medicare's annual wellness visits. Self-report surveys are variably completed, with results that lag behind objective measures. There is a need to assess physical function using practical, low-cost, and automated methods.
[0059] Wearable monitoring devices hold promise to improve healthcare efficiency. Collecting unobtrusive, continuous, and objective digital health markers can improve blood pressure, blood sugar control, or sleep. Remote patient monitoring (RPM) can automate data collection and may: (a) support patient engagement by improving a clinician's ability to track patients' status and respond to change; (b) furnish clinicians with ongoing data to alter treatment; and (c) offer patients data for motivation. Integrating RPM into routine care can overcome the need for staff to conduct measures.
[0060] Context for assessing function. Mobility is a complex task affected by footwear, types of activity, and gait patterns. Evaluating movement, balance, or strength in controlled settings using walkways or force-plated instrumented treadmills often follows structured protocols. Testing requires time, patient and staff effort, and space if performed in clinics. Inferring function from spontaneous movements - moving without restrictions - can overcome these limitations by integrating contextual data (e.g., walking slowly in crowds). Deep neural networks can include time-aligned motion and visual data using multi-modal and transfer learning to implement effective learning. Clinics are optimal sites to collect contextual data for testing, as home settings have different footprints and logistic barriers, require installation and upkeep, and face privacy concerns. Clinic wearables capturing spontaneous movement can infer function using deep neural networks while maximizing usability. However, as previously stated, the system can be used to collect contextual data in any setting outside of the standard clinical setting described herein.
[0061] Prototype Functionality & Assembly: The GFA will continuously monitor motion and video data as users freely move about in common areas of the clinic (e.g., check-in, hallways, waiting room), and interact with staff or surrounding objects. Data will be securely uploaded to a server. Continuous, passive data collection will not affect clinic workflows nor will it rely on location- or structure-dependent ambient sensors (e.g., surveillance cameras).
Commercially available sensors and components can be used to permit reliable, easy, and scalable assembly and data collection: (i) Portenta Vision Shield board - a 320x320-pixel grayscale camera sensor to capture and analyze video in real-time; (ii) Portenta H7 board - motion and video processors running in parallel will perform real-time machine learning tasks when interacting with users; (iii) an Adafruit 9-DOF Absolute Orientation Inertial Measurement Unit sensor equipped with a fusion algorithm that generates stable three-axis acceleration data. A 3D printer will build a secure, badge-sized casing containing the components. An example of the prototype is shown in FIG. 1.
[0062] Data Analysis: Usability Measures: Descriptive statistics will summarize user attributes and responses, with incremental analyses conducted as in iterative design processes. Mixed methods using a convergent parallel design, with integration at the interpretation stage, permit critical feedback loops. A codebook will be developed based on converging themes and informed by the grounded theory approach. Codes will be assigned to text, grouped, and checked for themes, and will consist of inductively derived codes from the data. Segments will be iteratively labeled to reflect experiences with the GFA system. Results will be merged using graphic displays to compare findings and examine concordance between self-reported and feasibility metrics.
[0063] Model Design: Handcrafted motion features are considered, such as histograms of gradients, statistical features, and wavelet energy, along with automated motion features from recurrent neural networks. Handcrafted visual features, such as spatio-temporal interest points and feature points between frames, and automated visual features using models pre-trained on first-person video datasets, such as EPIC-Kitchens and the Stanford-ECM Dataset, will be adopted. After feature extraction, both motion and visual features will be input into an attention module to output attention weights that reflect their relevance to physical function. These weights will be input into an attention pooling component to generate an embedding. Three fully connected layers will convert the embedding to walking speed, sit-to-stand repetitions, and postural control. A weakly supervised learning technique will be used to locate activities in time, using untrimmed time-series motion and visual features; function values are labels of the per-visit data. Either single- or multi-task learning will be used by separating or combining the loss functions. To improve performance, models will be initialized using manual attention labels, including sitting, standing, and moving, annotated by the RA based on surveillance videos. Training using manual labels is a fully supervised learning strategy, which will not be used in the product but is critical in initializing and evaluating models.
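The attention pooling and fully connected output heads described above can be sketched as follows. The feature dimension, attention parameters, and linear heads here are hypothetical placeholders to show the data flow, not the disclosed trained model.

```python
import numpy as np

def attention_pool(features, w_att):
    """Attention pooling over time-aligned feature vectors.

    features: (T, D) array of motion/visual features over T timesteps
    w_att:    (D,) attention parameter vector (hypothetical)
    Returns a single (D,) embedding: the attention-weighted sum.
    """
    scores = features @ w_att                  # (T,) relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ features                  # (D,) pooled embedding

# Hypothetical inputs: 50 timesteps of 8-dimensional fused features.
rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 8))
emb = attention_pool(feats, rng.normal(size=8))

# One linear head per outcome (walking speed, sit-to-stand reps,
# postural control), standing in for the three fully connected layers.
heads = rng.normal(size=(3, 8))
outputs = heads @ emb                          # 3 scalar predictions
```

In a real deep network the attention parameters and heads would be learned jointly from the weakly supervised labels; this sketch only shows how attention weights collapse a variable-length sequence into one embedding feeding three outputs.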
[0064] Accuracy Analysis: Accuracy of the GFA in inferring physical function will use root-mean-square deviation. The inference accuracy (ground truth being objective physical function measures) will be evaluated using precision, recall, area under the receiver operating characteristic and precision-recall curves, and F1-score. If accurate, a visualization technique will be developed to show segments of data and the visit. Visualization will allow a trend analysis using graphs and refinement of the model, and once finalized, will provide clinically useful data. If the model's outcome is inaccurate, false positives and negatives will be analyzed to investigate possible causes of failure and to explore additional steps to improve the utility of the data and the accuracy of the model.
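The accuracy metrics named above can be written in a minimal form. These are the standard definitions of root-mean-square deviation and binary precision/recall/F1, not the disclosed evaluation code.

```python
import numpy as np

def rmsd(pred, truth):
    """Root-mean-square deviation between inferred and measured values."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def precision_recall_f1(pred, truth):
    """Binary precision, recall, and F1 (ground truth = objective measures)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)       # true positives
    fp = np.sum(pred & ~truth)      # false positives
    fn = np.sum(~pred & truth)      # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Analyzing false positives and negatives, as described above, follows directly from the `fp` and `fn` counts computed here.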
[0065] Physical function measures rely on staff, clinic space, time, or self-report surveys that are time-consuming and subject to workflow barriers and reporting bias. The disclosed system will change clinical practice by: (1) developing a device that unobtrusively collects motion and visual data to create an accurate, multi-modal deep neural network model to assess function and provide data to clinicians; (2) including older adults who are often excluded from technology-based studies, underrepresented in clinical research, and burdened by medical complexities; (3) using a robust mixed-methods study design to develop a device that requires little training and installation to use and implement, and that can be easily adapted for other uses; (4) creating an affordable system that can substitute for in-person oversight and overcome barriers to care, allowing easy scalability to low-resource areas; and (5) closing the gap between clinicians and patients by collecting data, using predictive analytics, and integrating with electronic health records, all of which have potential for reimbursement.
[0066] GFA will allow clinicians to identify patients who would benefit from healthy aging interventions to enhance their independence. A competitive landscape exists - there is a lack of clinic-based systems that collect physical function data, compounded by a critical gap in the availability of patient-level data - that leaves clinicians unable to unobtrusively and objectively assess physical function. The GFA system addresses a service gap of using technology to overcome barriers faced in routine care. Healthy aging represents a growing clinical challenge to achieve, and the systems described herein can help to achieve healthy aging.
Example 2
Methods
[0067] Study Setting/Design: A single-arm, non-randomized, prospective, mixed-methods usability study was used to evaluate GFAS in a primary care clinic. The purpose of this single-visit study was to provide insights into its design and ergonomics, and to collect data to develop the necessary machine learning algorithms to infer physical function.
[0068] Recruitment/Sampling: A convenience sample was recruited from a Geriatrics Clinic at the. The clinic serves over 3,600 patients with a mean age of 80. Eligible participants were aged over 65 years with at least two chronic medical conditions (alcohol abuse;
Alzheimer’s disease and related dementia; heart failure; arthritis (osteoarthritis or rheumatoid); chronic hepatitis; asthma; HIV/AIDS; atrial fibrillation; hyperlipidemia; autism spectrum disorder; hypertension; cancer; ischemic heart disease; chronic kidney disease; osteoporosis; chronic obstructive pulmonary disease; schizophrenia or other psychotic disorders; depression; stroke; or diabetes).
[0069] Study Procedures: Study visits occurred at a multispecialty clinic building. All participants received a brief, interactive session on how to use the device. The device was then placed on participants (see details below) and the device was turned “on” while the participant walked around the clinic environment for 15 minutes. At its conclusion, physical function assessments were performed along with an exit interview.
[0070] Device Prototype (Figure 1): The objective of designing the prototype was to assess and gain information on physical function through motion- and visual-based user data. Motion data collection was facilitated by an Arduino Inertial Measurement Unit (IMU) shield mounted on an Arduino MKR Zero board, which was integrated with a micro-SD card to store the captured data. The IMU shield recorded four types of motion data — accelerometer, gyroscope, magnetic field, and Euler angles — allowing for a detailed analysis of user movements and orientation. For visual data, an Arduino Portenta H7 board was paired with an Arduino Vision Shield, which featured a high-resolution camera module. The Portenta H7 processed and stored images on a micro-SD card, ensuring accurate and readily accessible visual data. The use of Arduino-based hardware provided a flexible and customizable platform, enhancing the device’s robustness and versatility. To support prolonged use, the device was powered by a 10,000 mAh battery.
[0071] To house these components, the encasing was designed using 3D printing and underwent 23 iterations to optimize functionality and usability. Since the device is worn around
the neck, achieving proper weight distribution was a key consideration. To ensure balance and minimize discomfort, the chips were strategically placed on opposite sides of the encasing. The design also incorporated practical features, including slots for battery connections, micro-SD card exchange, and LED indicators for various operations. A hook at the top allowed for neckband attachment, while curved side features facilitated easy access to internal components.
[0072] Objective Measures: In addition to conducting vital signs (height, weight, blood pressure, oxygen saturation, respiratory rate), three physical function assessments were performed with participants: (i) Walking speed: On an 8 m course, the time to walk 4 m at a participant's usual walking speed and stride was measured. Participants started walking for the first 2 m, at which point the stopwatch started timing, and timing ended at the 6 m mark - there was a 2 m walking phase that was not timed at the end. This provided a run-in and run-out period, and the mean of three trials was recorded; (ii) 30-second sit-to-stand: participants sat in a chair with arms folded. The number of repetitions of standing up and returning to sit over 30 seconds was recorded. A repetition was counted once the participant's bottom touched the seat of the chair; (iii) Postural Balance: A Footscan® Entry Level V9 1-meter Pressure Plate System (Leuven, Belgium) measured each participant's center of pressure in millimeters (mm) during a double-leg stance. The Center of Pressure was recorded along both the X (side-to-side) and Y (front-to-back) directions. Additionally, the total distance traveled by the Center of Pressure was measured in mm, as was the ellipse area (EA) in square millimeters (mm²), which represents the area enclosing the total Center of Pressure trajectory. Data were collected for 30 seconds, with a start delay of 5 s per trial, and three trials were performed.
Participants stood barefoot on the platform with feet 5 cm apart at the heels and with the toes angled out forming about a 30-degree angle. Participants looked straight ahead at an image of the correct foot placement posted on the wall. Data were stored in a HIPAA-compliant database.
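The timed 4 m segment with untimed run-in and run-out reduces to simple arithmetic. The trial times below are assumed example values for illustration, not study data.

```python
# Hypothetical stopwatch readings (seconds) over the timed 4 m segment
# of the 8 m course; the 2 m run-in and 2 m run-out are untimed.
trial_times_s = [3.8, 4.0, 4.2]

# Mean of three trials, per the protocol described above.
mean_time_s = sum(trial_times_s) / len(trial_times_s)

# Usual gait speed in meters per second over the timed distance.
walking_speed_m_s = 4.0 / mean_time_s
```

With these assumed times the mean is 4.0 s, giving a walking speed of 1.0 m/s, which would fall above the 0.8 m/s gait-speed frailty threshold used later in this example.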
[0073] Frailty Status: Each individual's phenotypic frailty status was evaluated using Fried's criteria based on a mixture of subjective and objective measures, aligning with our previous methods (3). Criteria included: (a) gait speed: <0.8 m/s; (b) low activity: based on physical activity compared to peers their age; (c) exhaustion: whether participants felt tired during the day; (d) weakness: measured if tired over the past month; and (e) weight loss of >10% in the past 6 months - as not all participants had a weight at 6 months, the weight closest to the 6-month date was used. Participants were categorized as frail if they fulfilled three or more criteria, prefrail with one or two criteria, and robust if no criteria were fulfilled.
[0074] Questionnaires: Participants were asked how often they would come to clinic for evaluation and were asked single Likert questions on the device's overall acceptability and preference (Likert 1-10, low to high). To assess the system's usability, participants completed three self-reported questionnaires on the use of the device: (1) The USE questionnaire is a validated tool that evaluates the user experience across domains of usefulness, satisfaction, ease of use, and ease of learning of a system. This questionnaire was adapted to exclude items that were not applicable to this specific technology-based system. (2) Technology Acceptance is a questionnaire that helps evaluate the acceptability of a given system to participants and whether they would be willing to use it; and (3) The System Usability Scale is often used on small samples for technology usability (range 0-100) to evaluate a system's ease of use. A one-item Willingness to Pay question determined the relative value participants would place on the device in return for information on their health. Likert scales (1-10, low to high) were also used to evaluate privacy concerns and concerns regarding integration into the electronic health record. All questionnaires were completed through REDCap, an electronic data capture system.
[0075] Exit Interviews: All interviews were conducted by a research assistant and audio-recorded. They were transcribed by a professional transcription service and uploaded into a qualitative data analysis program, Dedoose, for coding. Questions focused on device acceptability, barriers to use, ergonomics (e.g., comfort), and interactions with the study personnel. Interviews lasted ~10 minutes, consisting of semi-structured questions and clarifying probes in which participants were encouraged to elaborate upon open-ended questions. All recordings were de-identified upon transcription, aggregated, and stored on password-protected, institutional servers.
[0076] Statistical Analysis: All analyses were conducted in both Python and Microsoft Excel 365 (Redmond, WA). Data were exported from REDCap and aggregated into a single dataset. Descriptive characteristics are presented as mean ± standard deviation, count (percentage), and median and range where appropriate. The primary outcomes of the study were measures of usability (USE questionnaire, System Usability, Technology Acceptance). For data analysis, three machine learning models were evaluated for accuracy in inferring changes in
physical function status (improvement, stability, or decline). All interview data were transcribed, coded and summarized.
[0077] Machine learning Analysis: Support vector classifiers (SVCs), random forests (RFs), and deep neural networks (DNNs) for motion-based classification tasks were used. SVCs were selected for their ability to handle high-dimensional data and identify optimal decision boundaries, particularly in non-linearly separable cases. RFs were included for their capacity to model non-linear relationships and provide feature importance insights, while DNNs were chosen for their effectiveness in learning complex hierarchical patterns from large, variable datasets. Initial comparative testing against simpler models, such as k-nearest neighbors and logistic regression, confirmed the superior performance of these selected methods.
[0078] The models were trained using features extracted from accelerometer, gyroscope, Euler angle, and magnetic field data, incorporating statistical measures such as mean, variance, skewness, and kurtosis across the XYZ axes. Key clinical outcomes assessed included gait speed, sit-to-stand transitions, and balance. Data processing involved computing mean values within predefined windows to capture relevant movement patterns, classifying physical function into three categories: improved, stable, or declined. Improvement was characterized by increased gait speed, more sit-to-stand repetitions, and reduced travel distances, while a decline reflected slower gait, fewer transitions, and increased travel distances.
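The per-axis statistical features named above (mean, variance, skewness, kurtosis across the XYZ axes) can be sketched as follows. The window shape and the 12-dimensional output layout are assumptions for illustration, not the disclosed feature pipeline.

```python
import numpy as np

def axis_features(window):
    """Per-axis statistical features for one window of tri-axial
    sensor data of shape (N, 3), columns being the X/Y/Z axes.

    Returns a 12-dim vector: 3 means, 3 variances, 3 skewness
    values, and 3 excess-kurtosis values.
    """
    w = np.asarray(window, float)
    mu = w.mean(axis=0)
    var = w.var(axis=0)
    z = (w - mu) / np.sqrt(var)            # standardized samples
    skew = (z ** 3).mean(axis=0)           # third standardized moment
    kurt = (z ** 4).mean(axis=0) - 3.0     # excess kurtosis
    return np.concatenate([mu, var, skew, kurt])
</n```

In a full pipeline one such vector would be computed per predefined window and per sensor stream (accelerometer, gyroscope, Euler angles, magnetic field) before being fed to the classifiers.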
[0079] For robust model evaluation, the dataset was split into 70% training, 15% validation, and 15% testing, ensuring a balanced assessment of performance. Due to the structured data pairing approach, additional cross-validation was unnecessary, as it effectively expanded the dataset and minimized reliance on data augmentation techniques. This strategy enhanced variability across subsets, strengthening the models’ robustness without requiring additional sampling methods.
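The 70/15/15 split described above can be sketched as follows; the shuffling seed and rounding choices are assumptions, not details from the study.

```python
import numpy as np

def split_70_15_15(n_samples, seed=0):
    """Shuffle sample indices and split them 70/15/15 into
    train/validation/test sets, mirroring the proportions above."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = round(0.70 * n_samples)
    n_val = round(0.15 * n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = split_70_15_15(200)
```

Each index appears in exactly one subset, so validation and test performance are measured on data the models never saw during training.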
Results
[0080] Of the 37 screened participants, 21 were enrolled. All 16 participants who did not enroll fulfilled our selection criteria. Despite expressing interest, three did not return our phone calls, two declined to participate in research activities, and the remaining did not want to engage with technology. Mean age was 76.6±5.5 years. Approximately half (52.4%) were female, 28.6% were non-White, 19% were on Medicaid, and 47.6% had a household income
<$50,000. Approximately 52.4% were classified as robust, with a mean walking speed of 1.13±0.27 m/s and 30-second sit-to-stand repetitions of 11.38±4.34 (median 11). Overall, participants favored the prototype and its existing functionality (7.14±2.35, 1-10 scale, low to high). Participants were willing to use this device if they were to attend clinic every few months (n=14 [66.6%]) to assess their physical function. Individuals did not believe they would need more practice to use the device (4.29±3.57, 1-10 scale, no need to high need), and would feel comfortable using this device, even at home, if recommended by their clinician (7.62±2.50, median 8.0, 1-10 scale, low to high comfort).
[0081] Participants had mixed feelings as to whether they would consider wearing such a device continuously outside of the clinic (5.8±2.89, 1-10 scale, low desire to high). Overall, participants expressed the need for more information to understand its use and noted that additional design engineering features were needed to make it more aesthetically pleasing.
[0082] Importantly, all participants felt it was easy to use, 74% of comments indicated they would use it again, and 81% noted it was comfortable.
[0083] It is noted that there was variation in the USE questionnaire (Usefulness, Satisfaction, and Ease of Use). While the overall score was marginally above average at 5.83±3.59 (1-10, low to high), higher scores were found in ease of use and learning, and lower scores in usefulness. Technology acceptance demonstrated different results in satisfaction, with favorable results in confidence, usefulness, efficiency, and overall attitudes.
[0084] The System Usability Score, which is important in the design engineering of newly designed systems, was extremely promising at 78.2±14.5, exceeding the industry standard of 68 (20).
Design Engineering Evaluation
[0085] During testing, a flipping issue was observed in one instance, resulting in the loss of some image data. To address this, further refinements will be made to the encasing's dynamics to reduce the likelihood of flipping. Additionally, transitioning to an internal lithium polymer (LiPo) battery will eliminate the need for an external power supply, allowing for a more compact and lightweight design. By integrating the battery, chips, and other components into a single encased unit, these modifications can enhance the device's portability and usability, making it more suitable for long-term real-world applications.
[0086] Machine learning results: Model performance was assessed across three classification tasks — gait-speed prediction, sit-to-stand classification, and balance assessment. For gait-speed prediction, RF performed the best with an accuracy of 0.775, weighted precision of 0.842, weighted recall of 0.775, and weighted F1-score of 0.794. A deep neural network (DNN) analysis followed with an accuracy of 0.713, weighted precision of 0.719, weighted recall of 0.713, and weighted F1-score of 0.698. SVC had the lowest performance with an accuracy of 0.677, weighted precision of 0.634, weighted recall of 0.677, and weighted F1-score of 0.654.
[0087] For sit-to-stand classification, RF had the lowest accuracy at 0.565, with a weighted precision of 0.708, weighted recall of 0.565, and a weighted F1-score of 0.605. NN outperformed both models with an accuracy of 0.661, weighted precision of 0.611, weighted recall of 0.661, and weighted F1-score of 0.633. SVC had intermediate performance, achieving an accuracy of 0.654, weighted precision of 0.594, weighted recall of 0.654, and weighted F1-score of 0.621.
[0088] For balance assessment, SVC had the highest accuracy at 0.663, with a weighted precision of 0.683, weighted recall of 0.663, and a weighted F1-score of 0.648. NN closely followed with an accuracy of 0.646, weighted precision of 0.666, weighted recall of 0.646, and weighted F1-score of 0.636. RF had the lowest accuracy at 0.585, with a weighted precision of 0.701, weighted recall of 0.584, and a weighted F1-score of 0.624.
[0089] Overall, RF performed best for gait-speed prediction, NN excelled in sit-to-stand classification, and SVC was the top performer for balance assessment.
[0090] The usability study demonstrates the feasibility of deploying a geriatric functional assessment system in collecting objective physical function data while concurrently generating a positive user experience among an older adult population within an ambulatory clinic setting.
[0091] These findings support the further development and optimization of this device to improve its ergonomics, validity, and demonstrate its efficacy.
Exemplary Aspects.
[0092] The following exemplary embodiments are provided, the numbering of which is not to be construed as designating levels of importance:
[0093] Aspect 1. A geriatric functional assessment system comprising: a wearable device comprising: a camera sensor; a motion sensor; a processor; and a memory storing instructions that, when executed by the processor, cause the system to: collect visual data from the camera sensor; collect motion data from the motion sensor; and analyze the collected data using a machine learning model to assess physical functioning.
[0094] Aspect 2. The system of aspect 1, wherein the machine learning model comprises a deep neural network.
[0095] Aspect 3. The system of aspect 1, wherein the motion sensor comprises an inertial measurement unit providing accelerometer, gyroscope, and magnetic field data.
[0096] Aspect 4. The system of aspect 1, wherein analyzing the collected data comprises: extracting features from the visual and motion data; inputting the extracted features into the machine learning model; and generating a physical function assessment based on the machine learning model output.
[0097] Aspect 5. The system of aspect 4, wherein the physical function assessment comprises at least one of: a gait speed measurement; a sit-to-stand transition count; a balance assessment; and a postural control measurement.
[0098] Aspect 6. The system of aspect 1, further comprising: a secure data transmission module configured to encrypt and transmit the collected data to a protected server.
[0099] Aspect 7. The system of aspect 1, wherein the machine learning model is trained using labeled physical function data from multiple subjects.
[00100] Aspect 8. The system of aspect 1, wherein the system is configured to: track changes in physical function over time; and generate alerts when physical function changes exceed predetermined thresholds.
[00101] Aspect 9. The system of aspect 1, wherein the wearable device is configured to be worn as a badge during clinical visits.
[00102] Aspect 10. A method for assessing physical function comprising: collecting visual data and motion data using a wearable device; processing the collected data to extract features; analyzing the extracted features using a machine learning model; and generating a physical function assessment based on the analysis.
[00103] Aspect 11. The method of aspect 10, wherein the machine learning model comprises at least one of: a support vector classifier; a random forest classifier; and a deep neural network.
[00104] Aspect 12. The method of aspect 10, further comprising: encrypting the collected data; transmitting the encrypted data to a secure server; and storing the encrypted data in compliance with privacy regulations.
[00105] Aspect 13. The method of aspect 10, wherein generating the physical function assessment comprises: calculating at least one of gait speed, sit-to-stand transitions, or balance metrics; comparing the calculated metrics to baseline measurements; and determining changes in physical function over time.
[00106] Aspect 14. The method of aspect 10, further comprising: integrating the physical function assessment with an electronic health record system.
[00107] Aspect 15. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to: receive visual and motion data from a wearable device; process the received data using a machine learning model; and generate a physical function assessment based on the processed data.
[00108] Aspect 16. The computer-readable medium of aspect 15, wherein the instructions further cause the processor to: extract features from the visual and motion data; input the extracted features into multiple machine learning models; and combine outputs from the multiple models to generate the physical function assessment.
[00109] Aspect 17. The computer-readable medium of aspect 15, wherein the instructions further cause the processor to: track changes in physical function metrics over time; and generate alerts when detected changes exceed predetermined thresholds.
[00110] Aspect 18. A system for monitoring physical function comprising: a wearable device configured to collect motion and visual data; and a processor configured to: analyze the collected data using machine learning; generate physical function metrics; and track changes in the metrics over time.
[00111] Aspect 19. The system of aspect 18, wherein the processor is further configured to: encrypt the collected data; transmit the encrypted data to a secure server; and integrate the physical function metrics with electronic health records.
[00112] Aspect 20. The system of aspect 18, wherein the machine learning analysis comprises: feature extraction from motion and visual data; classification using multiple machine learning models; and generation of physical function assessments based on model outputs.
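Aspect 16 recites combining outputs from multiple machine learning models into a single physical function assessment. The sketch below illustrates one plausible fusion rule, majority voting over per-model class predictions; the model names ("svc", "rf", "nn") and the voting rule are illustrative assumptions, as the aspects do not fix a specific fusion method.

```python
# Hypothetical sketch of "combine outputs from the multiple models"
# (Aspect 16): plurality voting over per-model class predictions.
# Model names and the voting rule are illustrative assumptions only.
from collections import Counter

def combine_model_outputs(predictions):
    """Fuse per-model class predictions by plurality vote, sample by sample.

    predictions: dict mapping model name -> list of predicted classes,
    with all lists aligned on the same samples.
    """
    fused = []
    for votes in zip(*predictions.values()):
        # most_common(1) picks the plurality class; ties break by first count
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

For example, gait-speed class predictions from a support vector classifier, a random forest, and a neural network could be passed as `{"svc": [...], "rf": [...], "nn": [...]}` to produce one fused label per sample.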
Claims
1. A geriatric functional assessment system comprising: a wearable device comprising: a camera sensor; a motion sensor; a processor; and a memory storing instructions that, when executed by the processor, cause the system to: collect visual data from the camera sensor; collect motion data from the motion sensor; and analyze the collected data using a machine learning model to assess physical functioning.
2. The system of claim 1, wherein the machine learning model comprises a deep neural network.
3. The system of claim 1, wherein the motion sensor comprises an inertial measurement unit providing accelerometer, gyroscope, and magnetic field data.
4. The system of claim 1, wherein analyzing the collected data comprises: extracting features from the visual and motion data; inputting the extracted features into the machine learning model; and generating a physical function assessment based on the machine learning model output.
5. The system of claim 4, wherein the physical function assessment comprises at least one of: a gait speed measurement; a sit-to-stand transition count; a balance assessment; and a postural control measurement.
6. The system of claim 1, further comprising: a secure data transmission module configured to encrypt and transmit the collected data to a protected server.
7. The system of claim 1, wherein the machine learning model is trained using labeled physical function data from multiple subjects.
8. The system of claim 1, wherein the system is configured to: track changes in physical function over time; and generate alerts when physical function changes exceed predetermined thresholds.
9. The system of claim 1, wherein the wearable device is configured to be worn as a badge during clinical visits.
10. A method for assessing physical function comprising: collecting visual data and motion data using a wearable device; processing the collected data to extract features; analyzing the extracted features using a machine learning model; and generating a physical function assessment based on the analysis.
11. The method of claim 10, wherein the machine learning model comprises at least one of: a support vector classifier; a random forest classifier; and a deep neural network.
12. The method of claim 10, further comprising: encrypting the collected data; transmitting the encrypted data to a secure server; and storing the encrypted data in compliance with privacy regulations.
13. The method of claim 10, wherein generating the physical function assessment comprises: calculating at least one of gait speed, sit-to-stand transitions, or balance metrics; comparing the calculated metrics to baseline measurements; and determining changes in physical function over time.
14. The method of claim 10, further comprising: integrating the physical function assessment with an electronic health record system.
15. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to: receive visual and motion data from a wearable device; process the received data using a machine learning model; and generate a physical function assessment based on the processed data.
16. The computer-readable medium of claim 15, wherein the instructions further cause the processor to: extract features from the visual and motion data; input the extracted features into multiple machine learning models; and combine outputs from the multiple models to generate the physical function assessment.
17. The computer-readable medium of claim 15, wherein the instructions further cause the processor to: track changes in physical function metrics over time; and generate alerts when detected changes exceed predetermined thresholds.
18. A system for monitoring physical function comprising: a wearable device configured to collect motion and visual data; and a processor configured to: analyze the collected data using machine learning; generate physical function metrics; and track changes in the metrics over time.
19. The system of claim 18, wherein the processor is further configured to: encrypt the collected data; transmit the encrypted data to a secure server; and integrate the physical function metrics with electronic health records.
20. The system of claim 18, wherein the machine learning analysis comprises: feature extraction from motion and visual data; classification using multiple machine learning models; and generation of physical function assessments based on model outputs.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463566568P | 2024-03-18 | 2024-03-18 | |
| US63/566,568 | 2024-03-18 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025199137A1 (en) | 2025-09-25 |
Family
ID=97140158
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/020422 Pending WO2025199137A1 (en) | 2024-03-18 | 2025-03-18 | Geriatric functional assessment system using passive wearabl sensing and deep learning |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025199137A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120191147A1 (en) * | 1996-12-16 | 2012-07-26 | Rao Raman K | Electronic skin patch for real time monitoring of cardiac activity and personal health management |
| US20190209022A1 (en) * | 2018-01-05 | 2019-07-11 | CareBand Inc. | Wearable electronic device and system for tracking location and identifying changes in salient indicators of patient health |
| US20210151179A1 (en) * | 2017-08-03 | 2021-05-20 | Rajlakshmi Dibyajyoti Borthakur | Wearable device and iot network for prediction and management of chronic disorders |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Cicirelli et al. | Ambient assisted living: a review of technologies, methodologies and future perspectives for healthy aging of population | |
| Martínez-Villaseñor et al. | UP-fall detection dataset: A multimodal approach | |
| Chung et al. | Sensor data acquisition and multimodal sensor fusion for human activity recognition using deep learning | |
| Gao et al. | The dilemma of analyzing physical activity and sedentary behavior with wrist accelerometer data: challenges and opportunities | |
| Waheed et al. | NT-FDS—a noise tolerant fall detection system using deep learning on wearable devices | |
| Morshed et al. | Deep osmosis: Holistic distributed deep learning in osmotic computing | |
| Eichler et al. | Automatic and efficient fall risk assessment based on machine learning | |
| WO2017147552A9 (en) | Multi-format, multi-domain and multi-algorithm metalearner system and method for monitoring human health, and deriving health status and trajectory | |
| Bravo et al. | M-health: lessons learned by m-experiences | |
| Jeng et al. | A wrist sensor sleep posture monitoring system: An automatic labeling approach | |
| Dong et al. | Towards whole body fatigue assessment of human movement: A fatigue-tracking system based on combined semg and accelerometer signals | |
| Yoo et al. | A frequency pattern mining model based on deep neural network for real-time classification of heart conditions | |
| US20210375459A1 (en) | Method and system enabling digital biomarker data integration and analysis for clinical treatment impact | |
| Leone et al. | Human postures recognition by accelerometer sensor and ML architecture integrated in embedded platforms: Benchmarking and performance evaluation | |
| Ogundokun et al. | Hybrid inceptionv3-svm-based approach for human posture detection in health monitoring systems | |
| Monge et al. | AI-based smart sensing and AR for gait rehabilitation assessment | |
| Alam et al. | Web of objects based ambient assisted living framework for emergency psychiatric state prediction | |
| Yen et al. | A clinical perspective on bespoke sensing mechanisms for remote monitoring and rehabilitation of neurological diseases: scoping review | |
| Liu et al. | Health care data analysis and visualization using interactive data exploration for sportsperson | |
| Leone et al. | Ambient and wearable sensor technologies for energy expenditure quantification of ageing adults | |
| Moghbelan et al. | A smart motor rehabilitation system based on the internet of things and humanoid robotics | |
| Channa et al. | Cloud-Connected Bracelet for Continuous Monitoring of Parkinson’s Disease Patients: Integrating Advanced Wearable Technologies and Machine Learning | |
| Sanchez-Fernandez et al. | A computer method for pronation-supination assessment in Parkinson’s disease based on latent space representations of biomechanical indicators | |
| Khattak et al. | Towards smart homes using low level sensory data | |
| Li et al. | Internet of things-based smart wearable system to monitor sports person health |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25774368; Country of ref document: EP; Kind code of ref document: A1 |