
WO2025219540A1 - System and method for determining physiological parameters of subject from biophotonic signals using machine learning - Google Patents

System and method for determining physiological parameters of subject from biophotonic signals using machine learning

Info

Publication number
WO2025219540A1
Authority
WO
WIPO (PCT)
Prior art keywords
machine learning
data
subject
learning models
sounds
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/060685
Other languages
French (fr)
Inventor
Dilip Rajeswari
Lucrezia Maria Elisabetta CESTER
Ghena HAMMOUR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lighthearted Ai Health Ltd
Original Assignee
Lighthearted Ai Health Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lighthearted Ai Health Ltd filed Critical Lighthearted Ai Health Ltd
Publication of WO2025219540A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/02108 Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/02108 Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics
    • A61B5/02125 Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics of pulse wave propagation time
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Definitions

  • the present disclosure relates generally to physiological monitoring systems and more particularly to a non-invasive system and method for determining physiological parameters of a subject from biophotonic signals using machine learning analysis.
  • the present disclosure relates generally to physiological monitoring and, more particularly, to a system and method for non-invasively determining physiological parameters of a subject by analyzing dynamic biophotonic signals obtained from biological tissue using one or more machine learning models.
  • the present disclosure relates to a system and method for providing accurate, non-invasive extraction of a wide range of bio-vitals or physiological parameters, using signal processing and machine learning techniques. Further, the present disclosure relates to a computer program that includes instructions for carrying out the method, when the computer program is executed on a computer system.
  • a system for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models comprises an optical device configured to emit coherent laser light on one or more body regions of a subject and an image-capturing device configured to capture reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject.
  • the one or more body regions comprise head, neck, chest, back, stomach, hand, leg area, or combinations thereof.
  • the system further comprises a server communicatively connected to the image-capturing device.
  • the server comprises a memory storing a database and a set of modules; and a processor configured to execute the set of modules to: obtain, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof; quantify, using a motion description model, motion in consecutive frames of the reflected light images or the videos; convert data characterizing quantified motion into time-series vibration data; and apply one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, the bandpass filters selectively passing frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
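  • As an illustration only, the following minimal Python sketch traces this processing chain for a stack of grayscale frames; the frame-differencing motion measure, the 20-150 Hz band, and the 1 kHz sampling rate are illustrative assumptions rather than features of the disclosure.

```python
# Minimal sketch of the described chain: quantify inter-frame motion,
# collapse it into time-series vibration data, and bandpass-filter it.
# Frame source, band edges, and sampling rate are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def frames_to_vibration(frames: np.ndarray) -> np.ndarray:
    """Quantify motion via frame differencing and sum the energy per frame."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.reshape(diffs.shape[0], -1).sum(axis=1)

def bandpass(x: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Selectively pass the band pertinent to one physiological parameter."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 1000.0                                # assumed camera frame rate (Hz)
frames = np.random.rand(4096, 64, 64)      # placeholder for reflected-light frames
vibration = frames_to_vibration(frames)
heart_sounds = bandpass(vibration, fs, 20.0, 150.0)  # e.g., heart-sound band
```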
  • the processor is configured to determine the heart sound segments, the respiratory sound segments, and the physiological velocity segments using one or more first machine learning models.
  • the one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of: (a) a heart sound class, including S1, S2, murmur, or abnormal heart sound; (b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and (c) other physiological velocity segment value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
  • the processor is configured to reconstruct, using one or more second machine learning models, the ECG signal based on the filtered time-series vibration data.
  • the one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region, and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals.
  • the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
  • the processor is configured to determine, using the one or more third machine learning models, the respiration rate of the subject based on the filtered time-series vibration data.
  • the one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity, and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
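  • By way of a hedged, non-limiting sketch, one simple frequency-domain realization of such respiration-rate analysis is shown below; the Welch estimator and the 0.1-0.5 Hz respiratory band are assumptions, not prescribed by the disclosure.

```python
# Sketch: respiration rate as the dominant spectral peak of a
# respiration-band vibration segment. Band and parameters are assumptions.
import numpy as np
from scipy.signal import welch

def respiration_rate_bpm(segment: np.ndarray, fs: float) -> float:
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 8192))
    band = (freqs >= 0.1) & (freqs <= 0.5)       # assumed respiratory band
    f_dom = freqs[band][np.argmax(psd[band])]    # dominant cyclic frequency
    return f_dom * 60.0                          # cycles/s -> breaths/min
```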
  • the processor is configured to determine, using the one or more fourth machine learning models, the heart rate of the subject based on the filtered time-series vibration data.
  • the one or more fourth machine learning models are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body, and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
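  • A minimal time-domain sketch of such periodicity analysis is given below; the autocorrelation approach and the 40-200 beats-per-minute search range are illustrative assumptions.

```python
# Sketch: heart rate from the strongest autocorrelation peak of a
# cardiac-band vibration segment (assumed length >= 2 s).
import numpy as np

def heart_rate_bpm(segment: np.ndarray, fs: float) -> float:
    x = segment - segment.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # one-sided autocorrelation
    lo, hi = int(fs * 60 / 200), int(fs * 60 / 40)     # lags for 200..40 bpm
    lag = lo + int(np.argmax(ac[lo:hi]))               # strongest periodic lag
    return 60.0 * fs / lag
```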
  • the optical device emits coherent laser light at wavelengths ranging from 400 nanometers (nm) to 2500 nm, with a power output between 0.1 milliwatts (mW) and 5 mW.
  • the vibration data is acquired at a sampling frequency ranging from approximately 50 hertz (Hz) to at least 400 Hz, and in some embodiments up to 10,000 Hz.
  • the motion description model employs at least one of an optical flow method, a block matching algorithm, a phase-based method, a gradient-based method or a feature-based method.
  • the processor is configured to standardize the time series vibration data to have zero mean and unit variance.
  • the processor is configured to segment the filtered vibration components before processing to determine the physiological velocity data, wherein one or more vibration segments are of equal or variable length, and wherein the variable-length segments are selected to have a duration within a range of 2 to 10 seconds.
  • the processor is configured to segment and label the low and high-frequency components of the time series vibration data using statistical heuristics-based methods, semi-Bayesian methods, or deep learning-based segmentation models.
  • the processor determines the at least one physiological velocity data by extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis; non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on extracted features; and determining, using the one or more machine learning models, at least one physiological parameter based on the selected features.
  • a method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models comprises: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject; obtaining, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or the videos; converting data characterizing quantified motion into time-series vibration data; and applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, the bandpass filters selectively passing frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
  • the method determines the heart sound segments, the respiratory sound segments, and the physiological velocity segments using one or more first machine learning models.
  • the one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of: (a) a heart sound class, including S1, S2, murmur, or abnormal heart sound; (b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and (c) other physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
  • the method reconstructs, using one or more second machine learning models, the ECG signal based on the filtered time-series vibration data.
  • the one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals.
  • the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
  • the method determines, using the one or more third machine learning models, the respiration rate of the subject based on the filtered time-series vibration data.
  • the one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
  • the method determines, using the one or more fourth machine learning models, the heart rate of the subject based on the filtered time-series vibration data.
  • the one or more fourth machine learning models are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac- induced motion or vibrations from the subject’s body and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
  • the method determines the at least one physiological parameter by extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis; non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on extracted features; and determining, using the one or more machine learning models, at least one physiological parameter based on the selected features.
  • a computer program product comprising a non-transitory computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a computerized device comprising processing hardware to execute a method of non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models, wherein the method comprises: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject; obtaining, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or the videos; converting data characterizing quantified motion into time-series vibration data; and applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to the one or more physiological parameters.
  • the method, system, and computer program provide several benefits by analyzing subtle dynamic patterns (e.g., quantified motion, speckle variations) within reflected biophotonic signals, rather than relying solely on conventional methods like photoplethysmography (PPG) amplitude or pulse timing.
  • the system and method enable extraction of richer and potentially more accurate physiological information non-invasively.
  • the processing technique, involving quantifying tissue surface motion dynamics (velocity, pressure) from light signals, establishes a pathway to derive complex vitals, such as detailed heart sounds (S1-S4, murmurs), respiratory sounds, and indicators of turbulence, which are challenging for existing non-invasive optical methods.
  • the system and method of the present disclosure provide improved non-invasive physiological monitoring by leveraging analysis of dynamic biophotonic signals and advanced machine learning, well suited for continuous health tracking, diagnostics, and personalized medicine applications.
  • FIG. 1 is a block diagram illustrating a system for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
  • FIG. 2 is an exemplary optical device in accordance with the present disclosure.
  • FIG. 3 is a block diagram of a server of FIG. 1 in accordance with the present disclosure.
  • FIG. 4 is a block diagram of a physiological parameter determining module of FIG. 3 in accordance with the present disclosure.
  • FIGS. 5A and 5B illustrate a method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
  • FIG. 6 is a schematic diagram of a computer architecture for executing the embodiments in accordance with the present disclosure.
  • the present disclosure provides a system and method for non- invasively determining physiological parameters of a subject by analyzing dynamic biophotonic signals obtained from biological tissue using one or more machine learning models.
  • the disclosed system and method address the limitations of existing techniques by enabling accurate, high-fidelity measurement of various bio-vitals, potentially exceeding established accuracy grades, derived from subtle skin vibrations regardless of sensor placement on various body locations or a need to sense through thin clothing; generalizing to diverse physiological conditions (such as cardiovascular or respiratory diseases) and user demographics; and facilitating machine learning-driven advanced analysis capabilities, including normative modeling, longitudinal tracking for monitoring changes over time, predictive/prognostic insights for anticipating health trajectories, and automated signal interpretation such as heart sound labeling.
  • a process, a method, a system, a product, or a device that includes a series of steps or units is not necessarily limited to expressly listed steps or units but may include other steps or units that are not expressly listed or that are inherent to such process, method, product, or device.
  • referring to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
  • FIG. 1 is a block diagram illustrating a system 100 for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
  • the system 100 includes an optical device 104, an image-capturing device 106, and a server 108.
  • the server 108 is communicatively connected to the image-capturing device 106.
  • the optical device 104 is configured to emit coherent laser light on one or more body regions of a subject.
  • the optical device 104 is configured to emit coherent laser light or other structured light onto one or more body regions of a subject 102, including but not limited to head, neck, chest, back, stomach, hand, leg area, or other body regions.
  • Coherent laser light refers to electromagnetic radiation emitted by a laser source in which the light waves maintain a constant phase relationship over time and space.
  • the optical device 104 may include one or more laser diodes, collimating optics, and beam-shaping components to produce a controlled and uniform illumination field.
  • the coherent light source utilizes wavelengths ranging from 400 nanometres (nm) to 2500 nm for optimal analysis, emitting a power output between 0.1 milliwatts (mW) and 5 mW.
  • the optical device 104 may be integrated with wavelength control mechanisms for wavelength-specific tissue penetration or speckle pattern formation to facilitate motion capture through reflected light intensity variations.
  • the optical device 104 emits the laser light through optical amplification based on stimulated emission of electromagnetic radiation, which means that the emitted photons (light particles) have the same frequency and phase, travel in the same direction, and maintain a consistent wavelength. This coherence enables the laser light to be highly directional, creating a tight beam with minimal divergence.
  • the coherent laser light is directed towards the user's skin, either directly or through clothing.
  • the optical device 104 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, a laptop, an electronic notebook, or a Smartphone.
  • the optical device 104 may be wearable.
  • the image capturing device 106 is configured to detect and capture the light reflected from the illuminated body region.
  • the image-capturing device 106 may include a Complementary Metal-Oxide-Semiconductor (CMOS) camera, a Charge-Coupled Device (CCD) camera, a mouse optical sensor, a Raspberry Pi camera, an infrared (IR) camera, a smartphone camera, or a virtual reality device camera, and may operate in visible, near-infrared, or multispectral imaging modes.
  • the image-capturing device 106 may be a high-frequency camera.
  • the image-capturing device 106 may be handheld, a camera, an infrared (IR) camera, a smartphone, a mobile phone, a virtual reality device, or any other kind of image-capturing device.
  • the image-capturing device 106 captures the temporal and spatial variations in the reflected light as a sequence of image frames, forming either a video stream or a set of reflected light images.
  • the variations in the captured images represent minute motion or vibration patterns resulting from physiological events, such as cardiac pulses, respiratory cycles, and vascular micro-movements. For instance, the vibrations are generated by mechanical contractions of the heart muscle, opening and closing of heart valves, and laminar and turbulent blood flow within the cardiovascular system of the subject 102.
  • the image-capturing device 106 acquires the reflected light at a high sampling frequency, typically 600 Hz to 1.2 kilohertz (kHz) and in some cases exceeding 1.6 kHz. In other embodiments, the image-capturing device 106 acquires the reflected light at a lower sampling frequency, typically at least 20 Hz and often exceeding 200 Hz.
  • the data characterizing reflected light may be a reflected light image or a video of the subject 102 that is recorded by the image-capturing device 106.
  • the data characterizing reflected light may be an MPEG-4 Part 14 (MP4) format file or a numerical array.
  • the data characterizing reflected light comprises low- and high-frequency components. More particularly, the data characterizing reflected light encodes information about skin vibrations caused by physiological activity associated with the subject 102.
  • the vibration data captured from a subject's body varies significantly depending on the body region being monitored, as each region reflects distinct physiological activities with unique signal characteristics. For example, in a neck region (jugular and carotid area), vibrations primarily represent jugular venous and carotid artery pulsations, showing moderate amplitude and high-frequency components related to cardiac and respiratory motion, which can be used to extract parameters like blood flow velocity and heart sounds. In a chest region, especially the precordial area, the signals are rich in low- to mid-frequency components and higher in amplitude, capturing mechanical heart sounds (S1, S2, murmurs) and respiratory vibrations, useful for determining heart and respiratory rates.
  • the abdominal region records irregular, low-frequency, burst-like patterns caused by gastrointestinal motility and respiratory-induced abdominal wall movements, while peripheral limb regions capture low to moderate amplitude vibrations from arterial pulsations and muscular micro-movements, aiding in the assessment of peripheral pulse velocity and neuromuscular activity.
  • the system 100 incorporates region-aware preprocessing and analysis techniques such as adjusting filter settings, feature extraction parameters, and applying region-specific machine learning models, thereby enabling accurate interpretation of physiological parameters from the corresponding vibration data.
  • the image capturing device 106 may include supplementary components.
  • a lens and/or filter assembly may be employed in conjunction with a sensor module of the image capturing device 106 to optimize light capture, potentially focus the reflected light, and filter out unwanted ambient light using techniques like bandpass filtering.
  • Motion sensors such as accelerometers and gyroscopes, may be included to detect user movement, allowing a software system to compensate for motion artifacts in the captured data.
  • a communication module such as Bluetooth or Wi-Fi, can facilitate wireless data transmission between the image capturing device 106 and external devices or networks, such as cloud servers or Electronic Health Record (EHR) systems.
  • a microcontroller or similar processing unit manages the operation of the hardware components, facilitates data transfer, synchronizes data streams, and may perform on-edge computing tasks, including initial data filtering, pre-processing, and data anonymization, governed by firmware/middleware.
  • the optical device 104 and the image capturing device 106 are integrated into a single unit.
  • the integrated unit facilitates precise optical alignment, reduces system footprint, and improves portability and ease of use for non-invasive physiological monitoring.
  • the integrated unit may include a shared housing that encapsulates the coherent light source and the image sensor, along with necessary optical elements such as lenses, mirrors, filters, and beam shapers. This arrangement supports synchronized light emission and image acquisition, allowing for real-time collection of reflected light signals from the subject’s body region.
  • the integrated unit may also contain embedded electronics for on-board pre-processing, power regulation, and wireless communication with the server 110.
  • the integrated unit can be configured as a standalone wearable device worn directly on the user's body, such as an armband, a wrist-worn device (like a watch or bracelet), a finger-worn device (like a ring), or an ear-worn device (like an earplug or integrated into a hearing aid/headphone).
  • the coherent laser source and image sensor modules can be integrated into various existing wearable items, including but not limited to headbands, straps, ankle bracelets, helmets, chokers, glasses, garments (shirts, bras, underpants, gloves, shoes), wearable patches adhering to the skin, or other wearable medical devices.
  • the module can be integrated into non-wearable devices where the laser is positioned to interact with the user's body, directly or through clothing. Examples include integration into other medical or wellness devices, fitness equipment, mobile phones, smart mirrors, bed sensors, toilets or toilet seats, chairs, tables, doors, car components, or even a computer mouse. Multiple such apparatuses or integrated modules may be deployed simultaneously on a single user or across multiple users, synchronized to capture data from various body locations concurrently, providing a comprehensive physiological assessment.
  • the disclosed embodiments are exemplary, and the system can be adapted to numerous other form factors and integration scenarios.
  • the optical device 104 and the image capturing device 106 are separate units, thereby enabling flexibility in positioning and targeting, enabling the system to adapt to different use cases or subject anatomies.
  • the optical device 104 may be placed at a fixed angle relative to the subject’s body, while the image-capturing device 106 is positioned independently to optimize the viewing angle, field of view, or minimize specular reflections.
  • the separation also allows for customizable baselines in stereo or multi-angle setups for enhanced depth resolution or motion triangulation.
  • synchronization between the optical and imaging devices may be achieved via wired or wireless signaling protocols, ensuring temporal coherence between emitted and reflected light frames.
  • Both configurations, the integrated unit and the separate units, may include calibration procedures to account for environmental variables such as ambient lighting, distance variations, or motion artifacts.
  • the server 110 is communicatively coupled with the optical device 104 and the image capturing device 106 via a network 108.
  • the network 108 may be a wireless network, a wired network, a combination of a wireless network and a wired network, or an Internet.
  • the server 110 includes one or more processors and memory storing computer-readable instructions.
  • the instructions cause the processor to obtain, from the image-capturing device 106, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof; quantify, using a motion description model, motion in consecutive frames of the reflected light images or the videos; convert data characterizing quantified motion into time-series vibration data; and apply one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, the bandpass filters selectively passing frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
  • the server 110 comprises one or more first machine learning models 112A, one or more second machine learning models 112B, one or more third machine learning models 112C, and one or more fourth machine learning models 112D.
  • the processor is configured to determine the heart sound, the respiratory sound, and the physiological velocity data using one or more first machine learning models 112A.
  • the processor is configured to reconstruct, using one or more second machine learning models 112B, the ECG signal based on the one or more vibration segments.
  • the processor is configured to determine, using the one or more third machine learning models 112C, the respiration rate of the subject based on the one or more vibration segments.
  • the processor is configured to determine, using the one or more fourth machine learning models 112D, the heart rate of the subject based on the one or more vibration segments.
  • the processor is configured to determine a health condition of the subject based on determined physiological parameters.
  • the processor may analyze one or more combinations of physiological parameters including, but not limited to, heart sound, respiratory sound, other physiological velocity data, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof.
  • the processor may compare the determined parameters against pre-defined clinical thresholds, baseline measurements, or population-level statistical models to identify abnormal patterns indicative of potential health conditions.
  • the processor employs one or more machine learning models, such as decision trees, support vector machines, deep neural networks, or ensemble models, trained on labeled physiological datasets to classify the subject's physiological state into one or more health condition categories.
  • the health condition may include cardiovascular disorders (e.g., heart valve diseases, coronary artery diseases, heart failure, arrhythmias), respiratory conditions (e.g., apnea, dyspnea, bronchial obstruction), circulatory anomalies (e.g., peripheral vascular disease), or neuromuscular abnormalities (e.g., irregular thoracic motion).
  • the system may further incorporate longitudinal monitoring data, previous health records, or subject-specific reference parameters to improve the specificity and sensitivity of the health condition detection. Alerts or diagnostic flags may be generated automatically and communicated to clinicians or caretakers through an integrated interface or external device.
  • FIG. 2 is an exemplary optical device 200 in accordance with the present disclosure. It is to be noted that the optical device 200 is for exemplary purpose only and that various modifications and alternative configurations may be employed without departing from the scope of the present disclosure.
  • the optical device 200 comprises a coherent light source, such as a laser diode or vertical-cavity surface-emitting laser (VCSEL), configured to emit light of a specified wavelength and coherence suitable for detecting micro-vibrations from a surface of the subject’s body.
  • the optical device 200 may include collimating and focusing optics, beam-shaping elements, and optical isolators to maintain beam quality and reduce back reflections.
  • the optical device 200 further comprises one or more beam steering or scanning modules, such as galvanometric mirrors, MEMS-based scanning units, or optical prisms, to dynamically direct the light beam across specific regions of interest on the subject’s body (e.g., neck, chest, or abdomen).
  • the optical device 200 may include polarization controllers or filters to enhance signal specificity based on the reflective properties of the skin and subcutaneous tissue.
  • the optical device 200 may be configured to operate in continuous wave (CW) or pulsed modes depending on a desired temporal resolution, safety thresholds, and power consumption requirements.
  • the optical device 200 may be integrated with photodetectors or optoelectronic receivers to capture backscattered or reflected light, which is then routed to the image-capturing device or directly processed for motion analysis.
  • FIG. 3 is a block diagram of the server 108 of FIG. 1 in accordance with the present disclosure.
  • the server 108 includes a database 300, an input receiving module 302, a motion description module 304, a conversion module 306, a filtering module 308, a segmenting and labelling module 310, a physiological parameter determining module 312, and a health condition determining module 314. It is to be understood that the delineation of these modules is for illustrative purposes only. In some embodiments, one or more of the modules may be combined into a single module, or subdivided into multiple sub-modules, depending on implementation requirements, computational architecture, or software design preferences. Additionally, the functionalities described herein may be implemented using a combination of hardware, software, firmware, or any suitable processing logic.
  • the database 300 stores raw and pre-processed reflected light data, motion descriptors, time-series vibration signals, segmented data, extracted features, intermediate and final physiological parameters, reconstructed ECG signals, and health condition classification results.
  • the database 300 may also maintain subject-specific data, such as baseline physiological measurements, historical health records, longitudinal monitoring data, and metadata associated with the imaging sessions (e.g., time, date, sensor configuration, or environmental conditions).
  • Longitudinal monitoring data refers to physiological or health-related data that is collected from the same subject over an extended period of time, often at multiple time points.
  • the database 300 further includes labeled datasets used to train, validate, and test the one or more machine learning models employed within the system 100.
  • the data may include ground-truth physiological parameters obtained via reference-grade medical devices (e.g., ECG machines, spirometers) used during the model training process.
  • the database 300 may be implemented using structured query language (SQL) or NoSQL-based storage systems and can reside on a local server, cloud infrastructure, or a distributed architecture. Access to the database 300 may be governed by encryption protocols and role-based access control to ensure data privacy, compliance with health data regulations (e.g., HIPAA or GDPR), and secure integration with clinical systems, where applicable.
  • the input receiving module 302 is configured to receive data characterizing reflected light, from the body regions of the subject 102, from various input sources, including the image-capturing device 106, the optical device 104, and any additional external sensors or devices that provide bio-photonic signals.
  • the input receiving module 302 processes the incoming data streams, converting them into a standardized format suitable for subsequent analysis by the other modules within the system 100.
  • the input receiving module 302 may also handle data from external sources such as patient records, previous health measurements, or additional sensor inputs (e.g., temperature sensors, motion detectors, etc.).
  • the input receiving module 302 ensures that all data is time-synchronized and tagged with relevant metadata, such as subject identifiers, time stamps, and session details, enabling efficient storage and retrieval for further processing.
  • the input receiving module 302 may incorporate errorchecking routines to ensure the integrity of the incoming data, discarding corrupted or incomplete signals to ensure only high-quality data is passed on to the next stages of processing.
  • the motion description module 304 is configured to quantify motion existing in consecutive frames of the reflected light video or image of the subject 102 using motion description algorithms.
  • the motion description algorithms analyse changes in pixel intensities in between frames of the reflected light video to quantify motion.
  • the module may employ techniques such as optical flow, frame differencing, phase-based motion estimation, or block matching to capture micro-movements on the tissue surface that are reflective of underlying physiological activity.
  • the motion descriptors may represent velocity vectors, displacement fields, or energy distributions, and serve as a basis for generating vibration signals corresponding to physiological processes.
  • the motion description module 304 may further apply spatial averaging, frame summation, or dimensionality reduction techniques (e.g., PCA) to enhance signal-to-noise ratio and reduce computational complexity.
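  • One possible, non-authoritative realization of the optical-flow variant is sketched below using OpenCV's Farneback estimator; all parameter values and the per-frame spatial averaging are illustrative assumptions.

```python
# Sketch: dense optical flow between consecutive frames, reduced to one
# motion-magnitude value per frame pair. Parameters are illustrative.
import cv2
import numpy as np

def flow_magnitude_series(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) uint8 grayscale stack of reflected-light images."""
    out, prev = [], frames[0]
    for cur in frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(
            prev, cur, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag = np.linalg.norm(flow, axis=2)  # per-pixel displacement magnitude
        out.append(mag.mean())              # spatial averaging to boost SNR
        prev = cur
    return np.asarray(out)
```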
  • the conversion module 306 is configured to receive the data characterizing quantified motion and convert it into time-series vibration data by integrating the total energy distribution in the reflected light across the consecutive frames in the video.
  • the integration of the total energy distribution in the reflected light includes aggregating intensity values of pixels across the consecutive frames (i.e., time series data).
  • the time series vibration data represent dynamics of the reflected light over time.
  • the time series data include features of the reflected light relevant to physiological events.
  • the filtering module 308 is configured to apply one or more bandpass filters on the time-series vibration data to isolate frequency components of interest related to the one or more physiological parameters, and normalize an amplitude of the time-series data by scaling the time-series data to a range, e.g., [-1, 1], or standardizing the time-series vibration data to have zero mean and unit variance.
  • the bandpass filter selectively passes the frequency components pertinent to the one or more physiological parameters by allowing the frequency components within a range, e.g., 20 hertz (Hz) to 750 Hz, to pass through.
  • the bandpass filter eliminates frequency components outside the range.
  • multiple bandpass filters may be applied in parallel to extract distinct bands corresponding to different physiological sources, such as low-frequency bands for respiratory activity (e.g., 0.1 Hz to 0.5 Hz), mid-frequency bands for heart sounds (e.g., 20 Hz to 150 Hz), and high-frequency components for vascular or muscular microvibrations (e.g., 150 Hz to 750 Hz).
  • the filtering module 308 may also implement adaptive filtering strategies where filter parameters are dynamically adjusted based on signal characteristics, subject-specific baselines, or known noise patterns. Additionally, artifact suppression techniques, such as notch filtering (e.g., at 50/60 Hz) or empirical mode decomposition (EMD), may be employed to further enhance signal quality.
  • the output of the filtering module 308 is a clean, frequency-isolated, and amplitude-normalized time-series signal, which is then forwarded to the segmenting and labelling module 310 for further processing.
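  • A hedged sketch of such a parallel filter bank, using the example band edges given above plus a 50 Hz notch, is shown below; the filter order, notch Q, and the assumed sampling rate of at least 2 kHz (needed for the 750 Hz band) are illustrative choices.

```python
# Sketch: parallel bandpass filter bank with mains-notch suppression and
# zero-mean/unit-variance standardization. Settings are illustrative.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

BANDS = {
    "respiration": (0.1, 0.5),         # low-frequency respiratory activity
    "heart_sounds": (20.0, 150.0),     # mid-frequency cardiac sounds
    "microvibrations": (150.0, 750.0)  # vascular/muscular micro-vibrations
}

def filter_bank(x: np.ndarray, fs: float) -> dict:
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    x = filtfilt(b, a, x)                                # suppress mains artifact
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)
        out[name] = (y - y.mean()) / (y.std() + 1e-12)   # zero mean, unit variance
    return out
```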
  • the segmenting and labelling module 310 is configured to divide the processed time series vibration data into equal or variable-length segments or epochs.
  • the segmenting and labelling module 310 selects the variable-length segments with a duration of, e.g., 5-10 seconds, based on the analysis requirements, desired temporal resolution, or physiological event cycles (such as cardiac or respiratory cycles).
  • the segmentation process is guided by heuristic rules, statistical change point detection, signal energy thresholds, or machine learning-based dynamic windowing techniques that account for the variability in signal patterns and subject-specific rhythms. For instance, higher energy regions may indicate cardiac events, while lower frequency periodic patterns may be aligned with respiratory cycles.
  • segmenting and labelling module 310 may utilize overlap-based windowing to preserve continuity between segments, improving robustness in downstream analysis.
  • the segments are further labelled or annotated with frequency band classifications (e.g., low-frequency or high-frequency components), which can aid in feature extraction and classification stages.
  • the segmented and labelled time-series data is then passed to the physiological parameter determining module 312 for extracting relevant features and determining physiological parameters using trained machine learning models.
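  • By way of illustration, a minimal overlap-based windowing sketch is given below; the 5-second window and 50% overlap are assumed values within the ranges discussed above.

```python
# Sketch: divide the filtered vibration signal into fixed-length,
# overlapping epochs. Window length and overlap are assumptions.
import numpy as np

def segment(x: np.ndarray, fs: float, win_s: float = 5.0, overlap: float = 0.5):
    size = int(win_s * fs)                      # samples per segment
    step = max(1, int(size * (1.0 - overlap)))  # hop between segment starts
    return [x[i:i + size] for i in range(0, len(x) - size + 1, step)]
```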
  • the physiological parameter determining module 312 is configured to determine, using one or more statistical models and/or machine learning models, at least one physiological parameter by analyzing the one or more vibration segments labelled with frequency components.
  • the physiological parameter includes heart sound segments, respiratory sound segments, physiological velocity data segments, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof.
  • different machine learning models may be specialized for different physiological parameters. For instance, the first machine learning models 112A may classify segments as heart or respiratory sounds, the second machine learning models 112B may reconstruct ECG-like signals based on temporal and spectral features extracted from vibration data, the third machine learning models 112C may estimate the respiration rate, and the fourth machine learning models 112D may estimate the heart rate.
  • the classification into first, second, third, and fourth models is made for differentiation and explanation purposes only; in practice, a single machine learning model may be capable of estimating multiple physiological parameters, or the system may utilize a combination of multiple specialized models operating in parallel or in sequence.
  • the one or more machine learning models may also incorporate patient-specific information, such as previous medical records, clinical history, or demographic data (e.g., age, sex, weight, pre-existing conditions), to enable personalized analysis and improve the accuracy and reliability of physiological parameter estimation.
  • the machine learning models are typically deep learning models such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs).
  • the health condition determining module 314 is configured to determine the health condition of the subject based on determined physiological parameters.
  • the health condition determining module 314 may analyze one or more combinations of physiological parameters including, but not limited to, heart sound, respiratory sound, physiological velocity data, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof.
  • the health condition determining module 314 may compare the determined parameters against pre-defined clinical thresholds, baseline measurements, or population-level statistical models to identify abnormal patterns indicative of potential health conditions.
  • the health condition determining module 314 employs one or more machine learning models, such as decision trees, support vector machines, deep neural networks, or ensemble models, trained on labeled physiological datasets to classify the subject's physiological state into one or more health condition categories.
  • the health condition may include cardiovascular disorders (e.g., heart valve diseases, coronary artery diseases, heart failure, arrhythmias), respiratory conditions (e.g., apnea, dyspnea, bronchial obstruction), circulatory anomalies (e.g., peripheral vascular disease), or neuromuscular abnormalities (e.g., irregular thoracic motion).
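  • A minimal, non-limiting sketch of such a classifier is shown below; the six-parameter feature vector, the random-forest choice, and the three condition categories are placeholder assumptions for illustration.

```python
# Sketch: classify a vector of determined physiological parameters into
# health-condition categories. Data, labels, and model are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.random.rand(200, 6)             # e.g., HR, RR, murmur score, ...
y_train = np.random.randint(0, 3, size=200)  # e.g., normal/cardiac/respiratory
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

subject_params = np.random.rand(1, 6)        # parameters determined for a subject
condition = clf.predict(subject_params)[0]   # predicted condition category
```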
  • FIG. 4 is a block diagram of the physiological parameter determining module 312 of FIG. 3 in accordance with the present disclosure.
  • the physiological parameter determining module 312 includes a feature extraction module 402, a feature selection module 404, a heart sound determination module 406, a respiratory sound determination module 408, a physiological velocity data determination module 410, an electrocardiogram (ECG) signal reconstruction module 412, a respiration rate determination module 414, and a heart rate determination module 416.
  • one or more of these modules may be combined into a single module, or alternatively, further subdivided into submodules, depending on system architecture, processing requirements, or specific deployment scenarios.
  • the feature extraction module 402 is configured to extract statistical features, complex features or combined features, non-linear entropy features, Independent Component Analysis (ICA) features, and wavelet-based features from the one or more vibration segments labelled with low and high frequency components.
  • the statistical features include information on the distribution and shape of the vibration segments.
  • the statistical features include mean, median, variance, standard deviation, skewness, or kurtosis.
  • the combined features include information on higher-order characteristics of the one or more vibration segments labelled with low and high frequency components.
  • the combined features include Peak-Peak mean, mean square value, Hjorth parameter activity, Hjorth parameter mobility, Hjorth parameter complexity, maximum power spectral frequency, maximum power spectral density (PSD), or power sum.
  • the non-linear entropy features provide information about the irregularity, complexity, and predictability of heart sound signals or ECG signals in the one or more vibration segments labelled with low and high frequency components.
  • the non-linear entropy features are Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, C0 complexity, correlation dimension, Lyapunov exponent, permutation entropy, or spectral entropy.
  • the ICA features and the wavelet-based features include additional information about the heart sound data.
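  • A hedged sketch computing a subset of these features for a single vibration segment follows; the histogram-based Shannon entropy and Welch-based spectral entropy estimators are illustrative implementation choices.

```python
# Sketch: statistical and entropy features for one vibration segment.
# Estimator details (bin count, Welch PSD) are assumptions.
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import welch

def extract_features(seg: np.ndarray, fs: float) -> dict:
    hist, _ = np.histogram(seg, bins=64)
    p = hist[hist > 0] / hist.sum()
    shannon = -np.sum(p * np.log2(p))            # Shannon entropy of amplitudes
    _, psd = welch(seg, fs=fs)
    q = psd / psd.sum()
    spectral = -np.sum(q * np.log2(q + 1e-12))   # spectral entropy of the PSD
    return {"mean": seg.mean(), "median": np.median(seg),
            "variance": seg.var(), "std": seg.std(),
            "skewness": skew(seg), "kurtosis": kurtosis(seg),
            "shannon_entropy": shannon, "spectral_entropy": spectral}
```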
  • the feature selection module 404 is configured to select relevant features from the statistical features, the complex features or combined features, the non-linear entropy features, the ICA features, and the wavelet-based features using a feature selection technique.
  • the feature selection technique includes low variance filters, high correlation filters, random forests, and forward feature selection. The feature selection module 404 selects the relevant features from the statistical features, the complex features or combined features, the non-linear entropy features, the ICA features, and the wavelet-based features based on one or more covariates of the subject 102.
  • the one or more covariates may include gender, age, or Body Mass Index (BMI) of each subject.
  • the feature selection technique analyses the one or more vibration segments labelled with low and high frequency components to extract the features related to cardiovascular conditions. If the features in the one or more vibration segments labelled with low and high frequency components are related to abnormal heart rhythms or murmurs, the feature selection technique assigns higher weights, since abnormal heart rhythms or murmurs are directly related to cardiovascular conditions. If the features are related to background noise or irrelevant physiological parameters, the feature selection technique assigns lower weights or scores.
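  • Two of the listed techniques, a low-variance filter followed by random-forest importance ranking, are sketched below; the thresholds, dataset shapes, and number of retained features are illustrative assumptions.

```python
# Sketch: low-variance filtering then random-forest feature ranking.
# Feature matrix, labels, and thresholds are placeholders.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(500, 40)                  # placeholder feature matrix
y = np.random.randint(0, 2, size=500)        # placeholder segment labels

X_var = VarianceThreshold(threshold=1e-3).fit_transform(X)    # drop near-constant features
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_var, y)
top_idx = np.argsort(forest.feature_importances_)[::-1][:10]  # top-weighted features
```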
  • the heart sound determination module 406 is configured to classify the low- and high-frequency components of the vibration segments into cardiac-related acoustic events, including heart sounds (S1 and S2) and murmurs (S3 and S4).
  • the S1 and S2 heart sounds are primarily generated by the closure of the atrioventricular (mitral and tricuspid) and semilunar (aortic and pulmonary) valves, respectively, during the cardiac cycle.
  • the S3 and S4 sounds often categorized as murmurs or additional heart sounds, may be associated with ventricular filling dynamics and can be indicative of underlying cardiovascular abnormalities, although they are not always pathological.
  • the heart sound determination module 406 processes the time-series vibration data by analyzing frequency patterns and temporal features that correspond to turbulent blood flow, often caused by valvular insufficiency, stenosis, or the interaction of blood with arterial plaques or cholesterol deposits. These acoustic signatures help in identifying potential cardiovascular issues with increased diagnostic granularity.
  • the heart sound determination module 406 determines the heart sound using the one or more first machine learning models 112A.
  • the one or more first machine learning models 112A are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of a heart sound class, including S1, S2, murmur, or abnormal heart sound.
  • the respiratory sound determination module 408 is configured to classify the low- and high-frequency components of the vibration segments into respiratory sound.
  • the respiratory sound determination module 408 utilizes the one or more first machine learning models 112A trained on annotated datasets to distinguish between various types of respiratory sounds such as normal breath sounds, wheezes, crackles (rales), stridor, and rhonchi, based on frequency content, temporal patterns, and amplitude characteristics.
  • the low-frequency components typically correspond to normal breathing patterns and broad airflow movements, while high-frequency components are often indicative of pathological respiratory conditions, such as airway obstructions, fluid accumulation, or restrictive lung diseases.
  • the respiratory sound determination module 408 may employ feature extraction techniques (e.g., spectral entropy, wavelet features, and time-domain descriptors) to capture key characteristics of respiratory acoustics and apply classification models (e.g., convolutional neural networks, support vector machines, or ensemble models) to label each segment accordingly (a spectral entropy sketch follows these bullets).
  • the respiratory sound determination module 408 may also analyze temporal consistency, duration, and bilateral sound symmetry to enhance diagnostic precision and differentiate between upper and lower respiratory tract anomalies.
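Below is a small, hedged example of the spectral entropy feature mentioned above, computed for a single segment; the 400 Hz sampling rate and the synthetic breath-like signal are assumptions.

```python
# Spectral entropy of one vibration segment: Shannon entropy of the
# normalized power spectral density, a common respiratory acoustic feature.
import numpy as np
from scipy.signal import welch

def spectral_entropy(segment: np.ndarray, fs: float = 400.0) -> float:
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 1024))
    p = psd / psd.sum()                 # treat the PSD as a distribution
    p = p[p > 0]                        # avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(p.size))  # normalized to [0, 1]

fs = 400.0
t = np.arange(0, 10, 1 / fs)
segment = np.sin(2 * np.pi * 0.3 * t)   # synthetic breath-like oscillation
segment += 0.1 * np.random.default_rng(1).normal(size=t.size)
print("spectral entropy:", spectral_entropy(segment, fs))
```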
  • the physiological velocity data determination module 410 is configured to analyze the vibration segments and classify them into physiological velocity data, which includes, but is not limited to, blood flow velocity, respiratory airflow velocity, or thoracic motion velocity. Using the first machine learning models 112A, the physiological velocity data determination module 410 processes the low- and high-frequency components of the vibration data to estimate hemodynamic parameters (e.g., cardiac output, peripheral circulation speed), respiratory airflow rates, or thoracic wall motion associated with chest expansion and contraction during breathing. For instance, blood flow velocity can be inferred from changes in vibrational frequency corresponding to arterial pulse waves or blood flow turbulence, while respiratory airflow velocity is derived from air movement dynamics within the tracheobronchial tree and alveolar regions.
  • the physiological velocity data determination module 410 integrates longitudinal data and subject-specific information (e.g., age, gender, health conditions) to provide personalized velocity metrics, which may help detect abnormalities in circulatory health or lung function.
  • the processed physiological velocity data is then used to inform further diagnostic or therapeutic decisions.
  • the first machine learning models 112A may be trained with a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of a physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity (a frequency-tracking sketch follows below).
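As a rough proxy for the frequency-based velocity inference described above, the sketch below tracks the dominant vibrational frequency over time using a spectrogram; it demonstrates only the spectral feature, not the trained first machine learning models 112A, and every parameter value is an assumption.

```python
# Track the dominant vibrational frequency per time window; shifts in this
# frequency are the kind of cue from which flow-related velocity changes
# could be inferred, per the bullets above.
import numpy as np
from scipy.signal import spectrogram

fs = 400.0                                  # assumed sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
vibration = np.sin(2 * np.pi * (1.0 + 0.2 * np.sin(0.1 * t)) * t)  # synthetic

freqs, times, sxx = spectrogram(vibration, fs=fs, nperseg=512, noverlap=384)
dominant_hz = freqs[np.argmax(sxx, axis=0)]  # one dominant frequency per window
print("dominant frequency (first windows):", dominant_hz[:5])
```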
  • the electrocardiogram (ECG) signal reconstruction module 412 is configured to reconstruct, using one or more second machine learning models 112B, the ECG signal based on the one or more vibration segments labelled with frequency components.
  • the one or more second machine learning models 112B are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region, and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals, wherein the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
  • the second machine learning models 112B are trained to capture both the temporal and morphological characteristics of the ECG waveform.
  • the second machine learning models 112B learn to reconstruct P-waves, QRS complexes, and T-waves, as well as the overall rhythm and heart rate information from the vibration-based features, which reflect the cardiac electrical activity as captured in the reflected light data (an illustrative model sketch follows these bullets).
  • the reconstructed ECG signal replicates temporal dynamics (e.g., the intervals between successive heartbeats) and morphological features (e.g., the amplitude and shape of the P, QRS, and T waves) of a physiological ECG waveform. This process allows for the non-invasive estimation of cardiac activity in subjects where direct electrode-based ECG recording is not feasible or preferred.
  • the system can adapt to individual subject characteristics, such as anatomical variations, cardiovascular health conditions, and motion artifacts in the data, ensuring accurate and personalized ECG reconstruction.
  • the reconstructed ECG signals can then be used to perform further diagnostic assessments such as heart rate variability analysis, arrhythmia detection, or assessment of cardiac health in conjunction with other physiological parameters.
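A minimal sketch of one plausible second-model architecture follows: a small 1-D convolutional network mapping a vibration segment to a same-length ECG segment, trained against electrode-recorded ground truth with a mean-squared-error loss. The layer sizes, loss, and random tensors are assumptions, not the patented design.

```python
# Hypothetical vibration-to-ECG reconstruction model (PyTorch sketch).
import torch
import torch.nn as nn

class VibrationToECG(nn.Module):
    def __init__(self):
        super().__init__()
        # Odd kernels with symmetric padding preserve the segment length.
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):                    # x: (batch, 1, samples)
        return self.net(x)

model = VibrationToECG()
vibration = torch.randn(8, 1, 2000)          # paired, time-synchronized input
ecg_truth = torch.randn(8, 1, 2000)          # stand-in electrode-based ECG
loss = nn.functional.mse_loss(model(vibration), ecg_truth)
loss.backward()                              # one illustrative training step
print("training loss:", float(loss))
```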
  • the ECG signal obtained via an electrode-based method is processed and compared with the reconstructed ECG signal.
  • filtering is performed to remove noise and artifacts from the reconstructed ECG signal. This may include low-pass filtering to eliminate high-frequency noise, high-pass filtering to remove baseline wander, bandpass filtering to isolate the relevant frequency range, and notch filtering to remove power line interference (these steps are sketched after the next bullet).
  • the reconstructed ECG signal undergoes normalization, where it is standardized to have zero mean and unit variance, ensuring consistent signal amplitude across recordings. Noise correction is then applied to further refine the ECG signal.
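These filtering and normalization steps can be realized with standard digital filters, as in the hedged example below; the cutoff values, notch frequency, and 500 Hz sampling rate are illustrative assumptions.

```python
# Illustrative ECG cleanup: high-pass, low-pass, and notch filtering,
# followed by zero-mean, unit-variance normalization.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 500.0                                   # assumed sampling rate, Hz
ecg = np.random.default_rng(2).normal(size=int(10 * fs))  # stand-in signal

# High-pass at 0.5 Hz to remove baseline wander.
b, a = butter(2, 0.5 / (fs / 2), btype="highpass")
ecg = filtfilt(b, a, ecg)

# Low-pass at 40 Hz to suppress high-frequency noise.
b, a = butter(4, 40.0 / (fs / 2), btype="lowpass")
ecg = filtfilt(b, a, ecg)

# Notch at 50 Hz to remove power line interference (60 Hz in some regions).
b, a = iirnotch(50.0, Q=30.0, fs=fs)
ecg = filtfilt(b, a, ecg)

# Normalize to zero mean and unit variance.
ecg = (ecg - ecg.mean()) / ecg.std()
```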
  • automated noise correction and segment removal are performed.
  • the cycle detection and windowing process begins by identifying R-peaks using algorithms such as wavelet transform or convolutional neural networks (CNNs). After detecting the peaks, the ECG signal is divided into windows centered around each R-peak, with the window size adjusted to include one complete cycle.
  • the template cycle creation process involves selecting middle cycles from the recording, assuming that these cycles are more stable and representative of the overall signal. A mean cycle is then computed from these middle cycles to create an ideal cycle template for comparison. Matched filtering is then employed to identify the best cycles by calculating the correlation coefficient between each cycle and the template cycle.
  • Cycles with a correlation coefficient below a threshold are considered too noisy and discarded, ensuring only consistent and high-quality cycles are retained for further analysis.
  • the indices of these discarded cycles are also tracked to ensure they are excluded from other related data, such as phonocardiogram (PCG) data, guaranteeing the integrity of the final dataset used for ECG reconstruction validation (an illustrative sketch of these steps follows below).
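The sketch below walks these steps on a synthetic signal: detect R-peaks, window one cycle around each, average the middle cycles into a template, then discard cycles whose correlation with the template falls below a threshold, tracking the discarded indices. The 0.8 threshold and window sizes are assumptions.

```python
# Illustrative cycle detection, template creation, and matched-filter rejection.
import numpy as np
from scipy.signal import find_peaks

fs = 500.0
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21          # synthetic spiky "R-peaks"

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
half = int(0.35 * fs)                            # half-window around each peak
cycles = np.array([ecg[p - half:p + half] for p in peaks
                   if p - half >= 0 and p + half <= ecg.size])

mid = slice(len(cycles) // 3, 2 * len(cycles) // 3)
template = cycles[mid].mean(axis=0)              # mean of the middle cycles

corr = np.array([np.corrcoef(c, template)[0, 1] for c in cycles])
kept = cycles[corr >= 0.8]                       # retain consistent cycles
discarded_idx = np.where(corr < 0.8)[0]          # also excluded from PCG data
print(f"kept {len(kept)} of {len(cycles)} cycles; discarded {discarded_idx}")
```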
  • the ECG signal reconstruction module 412 measures an accuracy of the P-wave, the QRS complex, and the T-wave durations in the reconstructed ECG signal by comparing the P-wave, the QRS complex, and the T-wave durations in the reconstructed ECG with historical ECG data.
  • the ECG signal reconstruction module 412 generates a similarity score for segments of the reconstructed ECG signal, such as the P-Q interval, QRS complex, S-T segment, and T wave, using cross-correlation or waveform similarity measures (e.g., cosine similarity), as in the minimal example below.
  • the similarity score may be low or high.
  • a high similarity score indicates that the reconstructed ECG signal replicates the shape and timing of critical segments in the historic ECG data.
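For concreteness, a cosine similarity between corresponding waveform segments can be computed as below; the short stand-in QRS vectors are purely illustrative.

```python
# Cosine similarity between a reconstructed and a historic waveform segment.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reconstructed_qrs = np.array([0.0, 0.3, 1.0, -0.4, 0.1])   # stand-in values
historic_qrs = np.array([0.0, 0.25, 0.95, -0.35, 0.05])
print("QRS similarity:", cosine_similarity(reconstructed_qrs, historic_qrs))
```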
  • the ECG signal reconstruction module 412 determines how well the reconstructed ECG signal captures and preserves the irregular heart rhythms or arrhythmias exhibited in the historic ECG data.
  • the ECG signal reconstruction module 412 compares the heights or amplitudes of the peaks (P, Q, R, S, T) in the reconstructed ECG signal against the historic ECG data.
  • the peaks represent various phases of the heart's electrical activity.
  • the ECG signal reconstruction module 412 assesses how the reconstructed ECG signal matches the historic ECG data in terms of the amplitudes of the peaks.
  • the ECG signal reconstruction module 412 assesses how accurately peaks, such as the R-R interval (the time between consecutive R waves), are timed in the reconstructed ECG signal compared to the historic ECG data.
  • the ECG signal reconstruction module 412 evaluates the sensitivity and specificity of peak detection (when identifying the R-peaks) in the reconstructed ECG signal relative to the historic ECG data.
  • the sensitivity ensures all significant peaks in the reconstructed ECG signal are detected, and no important peaks are missed.
  • the specificity ensures that no false peaks are introduced during the detection process and that genuine peaks are identified.
  • the ECG signal reconstruction module 412 measures variability in all RR intervals (consecutive heartbeats) in the ECG data using a Standard Deviation of NN intervals (SDNN) metric.
  • comparing the SDNN values of the reconstructed ECG signal and the historic ECG data indicates the accuracy of the overall heart rate variability.
  • the server 110 measures the Root Mean Square of Successive Differences (RMSSD), which reflects heart rate variability (HRV), i.e., the variability in the intervals between adjacent heartbeats.
  • the ECG signal reconstruction module 412 measures a power of the frequency component in a range of 0.04 to 0.15 Hz using a Low Frequency (LF) metric.
  • the power of the frequency component reflects both sympathetic and parasympathetic influences on heart rate variability.
  • the server 110 measures the power of the frequency component in a range of 0.15 to 0.4 Hz using a High Frequency (HF) metric.
  • the power of the frequency component reflects parasympathetic activity, particularly respiratory sinus arrhythmia.
  • the ECG signal reconstruction module 412 calculates the ratio between the LF and the HF power (the LF/HF ratio); an illustrative computation of these HRV metrics follows these bullets.
  • the LF/HF ratio reflects the balance between sympathetic and parasympathetic nervous activities.
  • the LF/HF ratio may be high or low; a higher LF/HF ratio indicates greater sympathetic dominance.
  • a lower LF/HF ratio indicates increased parasympathetic activity relative to sympathetic activity.
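The HRV metrics in the bullets above (SDNN, RMSSD, LF and HF band power, and the LF/HF ratio) can be computed from a series of RR intervals as sketched below; the 4 Hz resampling rate and the synthetic RR series are assumptions.

```python
# Time-domain (SDNN, RMSSD) and frequency-domain (LF, HF, LF/HF) HRV metrics.
import numpy as np
from scipy.signal import welch

rr = np.array([0.82, 0.85, 0.80, 0.87, 0.84, 0.86, 0.83, 0.88] * 20)  # seconds

sdnn = rr.std(ddof=1) * 1000.0                        # ms
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0   # ms

# Resample the RR series evenly at 4 Hz before spectral analysis.
beat_times = np.cumsum(rr)
fs = 4.0
t_even = np.arange(beat_times[0], beat_times[-1], 1 / fs)
rr_even = np.interp(t_even, beat_times, rr)

freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=128)
df = freqs[1] - freqs[0]
lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df   # 0.04-0.15 Hz power
hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df   # 0.15-0.40 Hz power
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, LF/HF={lf / hf:.2f}")
```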
  • the ECG signal reconstruction module 412 compares the Power Spectral Density (PSD) of the reconstructed ECG signal with that of the historic ECG data, which indicates how well the reconstruction captures the dynamic range and frequency characteristics of the ECG signal.
  • the ECG signal reconstruction module 412 identifies and compares frequency components (e.g., the dominant frequencies within the LF and HF bands), which may indicate how well the reconstructed ECG signal matches the historic ECG data in terms of spectral content.
  • the ECG signal reconstruction module 412 compares the total power of the reconstructed ECG signal with that of the historic ECG data, which is utilized to determine the overall energy captured by the reconstruction process.
  • the total power represents the sum of powers across all frequency components in the reconstructed ECG signal.
  • the ECG signal reconstruction module 412 quantifies complexity or regularity of the ECG signal using spectral entropy.
  • the ECG signal reconstruction module 412 compares the complexity (e.g., murmurs) or regularity (e.g., heart sounds) of the ECG signal using spectral entropy, which indicates how closely the reconstructed signal approximates the complexity of the heartbeat.
  • the ECG signal reconstruction module 412 provides information about an assessment of various waves in the ECG signal, including P wave (atrial depolarization), Q wave, R wave, S wave (ventricular depolarization), T wave (ventricular repolarization), and U wave.
  • the ECG signal reconstruction module 412 provides information about measurement of different intervals in the ECG signal, such as PR interval (atrioventricular conduction time), QRS interval (ventricular depolarization time), QT interval (total ventricular activity), and RR interval (heart rate).
  • the ECG signal reconstruction module 412 provides information about the segments between the waves, including PR segment (atrial repolarization), ST segment (early ventricular repolarization), and TP segment (ventricular repolarization complete to next depolarization).
  • the ECG signal reconstruction module 412 provides information about the QRS complex (ventricular depolarization).
  • the ECG signal reconstruction module 412 provides information about the characterization of arrhythmias, including abnormalities in the regularity or pattern of the heart rhythm.
  • the pattern of the heart rhythm includes conditions such as atrial fibrillation, bradycardia, tachycardia, or premature ventricular contractions (PVCs).
  • the ECG signal reconstruction module 412 provides information about the calculation of the heart rate based on various intervals, such as the S1-S1 interval (time between sequential heartbeats).
  • the respiration rate determination module 414 is configured to determine the respiration rate of the subject based on the one or more vibration segments labelled with frequency components, using the one or more third machine learning models 112C.
  • the one or more third machine learning models 112C are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity, and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
  • the third machine learning models 112C analyse the selected features, such as the time interval between successive peaks or the dominant frequency components within the respiratory band (e.g., 0.1-0.5 Hz), which are indicative of the subject’s respiration rate (an illustrative estimate is sketched after these bullets).
  • adaptive learning techniques may be employed to account for variations due to posture, activity, or individual physiological differences.
  • the respiration rate determination module 414 may also utilize ensemble learning or hybrid architectures that combine convolutional neural networks (CNNs) for spatial feature extraction and recurrent neural networks (RNNs), such as long short-term memory (LSTM) networks, to model the temporal dynamics of respiratory patterns.
  • the module may incorporate subject-specific parameters, demographic data, and longitudinal respiration trends to enhance personalization and prediction robustness, thereby ensuring accurate and continuous estimation of respiration rate in real-time or near real-time conditions.
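As a simple illustration of the dominant-frequency analysis referenced above, the sketch below estimates respiration rate from the 0.1-0.5 Hz band of a synthetic vibration segment; it stands in for a feature the third machine learning models 112C could consume, not the models themselves.

```python
# Respiration rate from the dominant frequency in the respiratory band.
import numpy as np
from scipy.signal import welch

fs = 50.0                                      # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)       # synthetic 15 breaths/min motion
breathing += 0.2 * np.random.default_rng(3).normal(size=t.size)

freqs, psd = welch(breathing, fs=fs, nperseg=2048)
band = (freqs >= 0.1) & (freqs <= 0.5)         # assumed respiratory band
dominant_hz = freqs[band][np.argmax(psd[band])]
print(f"respiration rate: {dominant_hz * 60:.1f} breaths/min")
```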
  • the heart rate determination module 416 is configured to determine the heart rate of the subject based on the one or more vibration segments labelled with frequency components, using the one or more fourth machine learning models 112D.
  • the one or more fourth machine learning models 112D are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body, and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
  • the vibration segments may be preprocessed to enhance heartbeat-related features by applying bandpass filters targeting the heart rate frequency range (typically 0.8-20 Hz), followed by normalization to reduce amplitude variability across subjects or sessions, as sketched below.
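The preprocessing and periodicity analysis just described might look like the following sketch, with the 0.8-20 Hz band taken from the bullet above and all other parameter values assumed for illustration.

```python
# Bandpass, normalize, then estimate heart rate from peak spacing.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 400.0                                     # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
vibration = np.sin(2 * np.pi * 1.2 * t) ** 9   # synthetic ~72 bpm pulse train
vibration += 0.1 * np.random.default_rng(4).normal(size=t.size)

b, a = butter(3, [0.8 / (fs / 2), 20.0 / (fs / 2)], btype="bandpass")
filtered = filtfilt(b, a, vibration)
filtered = (filtered - filtered.mean()) / filtered.std()   # normalization

peaks, _ = find_peaks(filtered, height=1.0, distance=int(0.4 * fs))
bpm = 60.0 / np.median(np.diff(peaks) / fs)
print(f"estimated heart rate: {bpm:.0f} bpm")
```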
  • FIGS. 5A and 5B illustrate a method for non- invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
  • the descriptions of the hardware elements such as the optical device, image-capturing device, and processing units previously outlined in FIGS. 1-4 are not repeated herein. Rather, FIGS. 5A and 5B focus on the procedural flow of operations carried out by the system components for signal acquisition, processing, feature extraction, machine learning-based analysis, and physiological parameter determination. The method can be implemented in real-time or offline modes depending on the system configuration and application requirements.
  • the method includes emitting, using the optical device 104, coherent laser light on one or more body regions of a subject.
  • the method includes capturing, using the image-capturing device 106, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject.
  • the method includes obtaining, from the image-capturing device 106, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof.
  • the method includes quantifying, using the motion description model, motion in consecutive frames of the reflected light images or the videos.
  • the method includes converting data characterizing quantified motion into a time series vibration data.
  • the method includes applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters and optionally further segmenting the filtered time-series vibration data into one or more vibration segments; the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sound, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
  • FIG. 6 is a schematic diagram of a computer architecture 600 for executing the embodiments in accordance with the present disclosure.
  • This schematic drawing illustrates a hardware or computer configuration of a server 110/computer system/computing device in accordance with the embodiments herein.
  • the server 110 comprises the computer architecture 600 for executing one or more functions in determining one or more physiological parameters.
  • the computer architecture 600 includes at least one processing device CPU 10 that may be interconnected via system bus 14 to various devices such as a random-access memory (RAM) 12, read-only memory (ROM) 16, and an input/output (I/O) adapter 18.
  • the I/O adapter 18 can connect to peripheral devices, such as disk units 38 and program storage devices 40 that are readable by the system.
  • the system can read the inventive instructions on the program storage devices 40 and follow these instructions to execute the methodology of the embodiments herein.
  • the system further includes a user interface adapter 22 that connects a keyboard 28, mouse 30, speaker 32, microphone 34, and/or other user interface devices such as a touch screen device (not shown) to the bus 14 to gather user input.
  • a communication adapter 20 connects the bus 14 to a data processing network 42.
  • a display adapter 24 connects the bus 14 to a display device 26, which provides a graphical user interface (GUI) 36 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.

Abstract

A system and method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models are provided. The system comprises an optical device to emit coherent laser light on body regions of a subject and an image-capturing device to capture reflected light, from the body regions, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject. The system further comprises a server configured to: quantify motion in consecutive frames of reflected light images or the videos; convert data characterizing quantified motion into a time series vibration data; apply bandpass filters on the time-series vibration data to isolate frequency components corresponding to physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sound, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery sounds or bruits, carotid artery bruits, or combinations thereof; and determine, using statistical models and/or machine learning models, at least one physiological parameter by analyzing the vibration segments.

Description

SYSTEM AND METHOD FOR DETERMINING PHYSIOLOGICAL PARAMETERS OF SUBJECT FROM BIOPHOTONIC SIGNALS USING MACHINE LEARNING
TECHNICAL FIELD
[1] The present disclosure relates generally to physiological monitoring systems and more particularly to a non-invasive system and method for determining physiological parameters of a subject from biophotonic signals using machine learning analysis.
BACKGROUND
[2] Monitoring of bio-vitals such as heart sounds, respiratory sounds, heart rate (HR), blood pressure (BP), respiration rate (RR), and oxygen saturation (SpO2) is crucial for health management, disease detection, and optimizing treatments. There is a significant and ongoing trend towards developing non-invasive and convenient methods for physiological monitoring. However, the traditional techniques often necessitate direct contact with the user’s body, posing challenges, particularly in ambulatory settings or for continuous monitoring.
[3] Various technologies are currently employed for physiological assessment. Wearable devices, such as smart watches, fitness bands, patches, and sensor-integrated clothing, commonly incorporate optical sensing techniques. Such optical sensing techniques typically involve illuminating a skin area using light sources such as light-emitting diodes (LEDs), and detecting variations in reflected or scattered light using optical detectors like photodiodes. Techniques such as photoplethysmography (PPG) are frequently used to derive physiological parameters, including heart rate and blood oxygen saturation. Additionally, there is ongoing exploration into integrating multiple types of sensors within a single device or platform to enhance monitoring capabilities.
[4] Furthermore, the application of computational algorithms, including machine learning (ML), deep learning (DL), and artificial intelligence (AI), to analyze data gathered from wearable sensors is an established practice. These algorithms are utilized for diverse functions, such as extracting relevant features from complex time-series sensor data, monitoring physiological signs to detect potential illness or adverse conditions, predicting future health states or outcomes, and sometimes adapting recommendations or alerts based on the collected data. Concepts like analyzing physiological trends over extended periods (longitudinal analysis) and incorporating user-specific information (like demographics) into the analysis are also known areas of investigation. Efforts are continually made to improve the quality and reliability of signals obtained from contact-based devices like stethoscopes and wearable sensors.
[5] Despite these ongoing developments, significant challenges and limitations remain in the field of non-invasive physiological monitoring. Accurately measuring certain key vital signs continuously, conveniently, and without requiring user calibration remains difficult. For example, obtaining reliable, cuffless blood pressure measurements that meet clinical accuracy standards is a well-known challenge for many existing technologies. Additionally, extracting high-fidelity information beyond basic parameters like heart rate (such as detailed acoustic signatures corresponding to heart sounds, respiratory sounds, or indicators of blood flow turbulence) using non-invasive optical methods presents considerable technical hurdles. Many existing optical techniques may primarily analyze bulk changes in light absorption or scattering, potentially overlooking subtle but information-rich dynamic patterns in the light interacting with the tissue surface caused by physiological processes.
[6] Moreover, ensuring high signal quality and robustness against motion artifacts is a persistent issue for sensors worn during daily activities. Integrating advanced, multiparameter sensing capabilities into comfortable, unobtrusive, and diverse form factors suitable for long-term use also poses design and technical difficulties. While computational analysis is employed, there remains a need for more sophisticated and robust analysis pipelines capable of extracting comprehensive and subtle features from sensor data, performing highly accurate classification or prediction tailored to specific conditions and individual users, and generating genuinely actionable insights for diagnostics, prognostics, or personalized therapeutic guidance.
[7] Therefore, a need persists for improved systems and methods that can accurately and reliably capture a broader range of physiological information non-invasively using versatile form factors, while also providing advanced analytical capabilities to translate this data into meaningful health assessments and interventions.
SUMMARY
[8] The present disclosure relates generally to physiological monitoring and, more particularly, to a system and method for non-invasively determining physiological parameters of a subject by analyzing dynamic biophotonic signals obtained from biological tissue using one or more machine learning models.
[9] It is an object of the present disclosure to provide a biophotonic based physiological monitoring system and method. More particularly, the present disclosure relates to a system and method for providing accurate, non-invasive extraction of a wide range of bio-vitals or physiological parameters, using signal processing and machine learning techniques. Further, the present disclosure relates to a computer program that includes instructions for carrying out the method, when the computer program is executed on a computer system.
[10] According to a first aspect, there is provided a system for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models. The system comprises an optical device configured to emit coherent laser light on one or more body regions of a subject and an image-capturing device configured to capture reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject. The one or more body regions comprise head, neck, chest, back, stomach, hand, leg area, or combinations thereof. The system further comprises a server communicatively connected to the image-capturing device. The server comprises a memory storing a database and a set of modules; and a processor configured to execute the set of modules to: obtain, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantify, using a motion description model, motion in consecutive frames of the reflected light images or the videos; convert data characterizing quantified motion into a time series vibration data; apply one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters resulting in selective physiological velocity data such as heart sounds, respiratory sound, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
[11] In some embodiments, the processor is configured to determine the heart sound segments, the respiratory sound segments, the physiological velocity segments using one or more first machine learning models. The one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of: (a) a heart sound class, including S1, S2, murmur, or abnormal heart sound; (b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and (c) other physiological velocity segment value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
[12] In some embodiments, the processor is configured to reconstruct, using one or more second machine learning models, the ECG signal from the filtered time-series vibration data. The one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region, and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals. The reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
[13] In some embodiments, the processor is configured to determine, using the one or more third machine learning models, the respiration rate of the subject from the filtered time-series vibration data. The one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity, and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
[14] In some embodiments, the processor is configured to determine, using the one or more fourth machine learning models, the heart rate of the subject from the filtered time-series vibration data. The one or more fourth machine learning models are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body, and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
[15] In some embodiments, the optical device emits coherent laser light at wavelengths ranging from 400 nanometers (nm) to 2500 nm, with a power output between 0.1 milliwatts (mW) and 5 mW.
[16] In some embodiments, the vibration data is acquired at a sampling frequency in a range of approximately 50 hertz (Hz) to at least 400 Hz, and up to 10000 Hz.
[17] In some embodiments, the motion description model employs at least one of an optical flow method, a block matching algorithm, a phase-based method, a gradient-based method or a feature-based method.
[18] In some embodiments, the processor is configured to standardize the time series vibration data to have zero mean and unit variance.
[19] In some embodiments, the processor is configured to segment the filtered vibration components before processing to determine the physiological velocity data, wherein one or more vibration segments are of equal or variable length, and wherein the variable-length segments are selected to have a duration within a range of 2 to 10 seconds.
[20] In some embodiments, the processor is configured to segment and label the low and high-frequency components of the time series vibration data using statistical heuristics-based methods, semi-Bayesian methods, or deep learning-based segmentation models.
[21] In some embodiments, the processor determines the at least one physiological velocity data by extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis; non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on extracted features; and determining, using the one or more machine learning models, at least one physiological parameter based on the selected features.
[22] According to a second aspect, there is provided a method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models. The method comprises: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject; obtaining, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or the videos; converting data characterizing quantified motion into a time series vibration data; applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters resulting in selective physiological velocity data such as heart sounds, respiratory sound, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
[23] In some embodiments, the method determines the heart sound segments, the respiratory sound segments, and the physiological velocity segments using one or more first machine learning models. The one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of: (a) a heart sound class, including S1, S2, murmur, or abnormal heart sound; (b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and (c) other physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
[24] In some embodiments, the method reconstructs, using one or more second machine learning models, the ECG signal from the filtered time-series vibration data. The one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals. The reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
[25] In some embodiments, the method determines, using the one or more third machine learning models, the respiration rate of the subject from the filtered time-series vibration data. The one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
[26] In some embodiments, the method determines, using the one or more fourth machine learning models, the heart rate of the subject from the filtered time-series vibration data. The one or more fourth machine learning models are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
[27] In some embodiments, the method determines the at least one physiological parameter by extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis; non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on extracted features; and determining, using the one or more machine learning models, at least one physiological parameter based on the selected features.
[28] According to a third aspect, there is provided a computer program product comprising a non-transitory computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a computerized device comprising processing hardware to execute a method of non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models, wherein the method comprises: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject; obtaining, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or the videos; converting data characterizing quantified motion into a time series vibration data; and applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, wherein the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters resulting in selective physiological velocity data such as heart sounds, respiratory sound, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof. The one or more body regions comprise head, neck, chest, back, stomach, hand, leg area, or combinations thereof.
[29] The method, system, and computer program provide several benefits by analyzing subtle dynamic patterns (e.g., quantified motion, speckle variations) within reflected biophotonic signals, rather than relying solely on conventional methods like photoplethysmography (PPG) amplitude or pulse timing. The system and method enable extraction of richer and potentially more accurate physiological information non-invasively. The processing technique, involving quantifying tissue surface motion dynamics (velocity, pressure) from light signals, establishes a pathway to derive complex vitals, such as detailed heart sounds (S1-S4, murmurs), respiratory sounds, and indicators of turbulence, which are challenging for existing non-invasive optical methods. This is achieved through the integration of specific biophotonic sensing with signal processing and sophisticated machine learning pipelines, capable of handling complex feature extraction, robust analysis, and potentially personalization using longitudinal data. The system's ability to derive a comprehensive suite of parameters from a potentially single sensing modality offers advantages in usability and potential for integration into various form factors.
[30] Therefore, in contradistinction to existing solutions which may be limited in accuracy or the range of parameters derived, the system and method of the present disclosure provide improved non-invasive physiological monitoring by leveraging analysis of dynamic biophotonic signals and advanced machine learning, well suited for continuous health tracking, diagnostics, and personalized medicine applications.
[31] These and other aspects of the disclosure will be apparent from the implementation(s) described below.
BRIEF DESCRIPTION OF THE DRAWINGS
[32] The embodiments herein will be better understood from the following detailed descriptions with reference to the drawings, in which:
[33] FIG. 1 is a block diagram illustrating a system for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
[34] FIG. 2 is an exemplary optical device in accordance with the present disclosure.
[35] FIG. 3 is a block diagram of a server of FIG. 1 in accordance with the present disclosure.
[36] FIG. 4 is a block diagram of a physiological parameter determining module of FIG. 3 in accordance with the present disclosure.
[37] FIGS. 5A and 5B illustrate a method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure.
[38] FIG. 6 is a schematic diagram of a computer architecture for executing the embodiments in accordance with the present disclosure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[39] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[40] As mentioned, there remains a need for a non-invasive approach for physiological monitoring. The present disclosure provides a system and method for non-invasively determining physiological parameters of a subject by analyzing dynamic biophotonic signals obtained from biological tissue using one or more machine learning models. The disclosed system and method address the limitations of existing techniques by enabling accurate, high-fidelity measurement of various bio-vitals potentially exceeding established accuracy grades, derived from subtle skin vibrations, regardless of sensor placement on various body locations or a need to sense through thin clothing, generalizing to diverse physiological conditions (like cardiovascular or respiratory diseases) and user demographics, and facilitating machine learning-driven advanced analysis capabilities with normative modeling, longitudinal tracking for monitoring changes over time, predictive/prognostic insights for anticipating health trajectories, and automated signal interpretation including heart sound labeling. To make solutions of the disclosure more comprehensible for a person skilled in the art, the following implementations of the disclosure are described with reference to the accompanying drawings.
[41] Terms such as "a first", "a second", "a third", and "a fourth" (if any) in the summary, claims, and foregoing accompanying drawings of the disclosure are used to distinguish between similar objects and are not necessarily used to describe a specific sequence or order. It should be understood that the terms so used are interchangeable under appropriate circumstances so that the implementations of the disclosure described herein are, for example, capable of being implemented in sequences other than the sequences illustrated or described herein. Furthermore, the terms "include" and "have" and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, a method, a system, a product, or a device that includes a series of steps or units, is not necessarily limited to expressly listed steps or units but may include other steps or units that are not expressly listed or that are inherent to such process, method, product, or device.
[42] Referring now to the drawings and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[43] FIG. 1 is a block diagram illustrating a system 100 for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure. The system 100 includes an optical device 104, an image-capturing device 106, and a server 110. The server 110 is communicatively connected to the image-capturing device 106. The optical device 104 is configured to emit coherent laser light or other structured light onto one or more body regions of a subject 102, including but not limited to head, neck, chest, back, stomach, hand, leg area, or other body regions. Coherent laser light refers to electromagnetic radiation emitted by a laser source in which the light waves maintain a constant phase relationship over time and space. In some embodiments, the optical device 104 may include one or more laser diodes, collimating optics, and beam-shaping components to produce a controlled and uniform illumination field. The coherent light source utilizes wavelengths ranging from 400 nanometres (nm) to 2500 nm for optimal analysis, emitting a power output between 0.1 milliwatts (mW) and 5 mW. In some embodiments, the optical device 104 may be integrated with wavelength control mechanisms for wavelength-specific tissue penetration or speckle pattern formation to facilitate motion capture through reflected light intensity variations. The optical device 104 emits the laser light through optical amplification based on stimulated emission of electromagnetic radiation, which means that emitted photons (light particles) have the same frequency and phase, traveling in the same direction and maintaining a consistent wavelength. This coherence keeps the laser light highly collimated, creating a tight beam with minimal divergence. The coherent laser light is directed towards the user's skin, either directly or through clothing. In some embodiments, the optical device 104 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, a laptop, an electronic notebook, or a Smartphone. In some embodiments, the optical device 104 may be wearable.
[44] The image-capturing device 106 is configured to detect and capture the light reflected from the illuminated body region. The image-capturing device 106 may include a Complementary Metal-Oxide-Semiconductor (CMOS) camera, a Charge-Coupled Device (CCD) camera, a mouse optical sensor, a Raspberry Pi camera, an infrared (IR) camera, a smartphone camera, or a virtual reality device camera, and may operate in visible, near-infrared, or multispectral imaging modes. In some embodiments, the image-capturing device 106 may be a high-frequency camera. In some embodiments, the image-capturing device 106 may be handheld, a camera, an infrared (IR) camera, a smartphone, a mobile phone, a virtual reality device, or any kind of image-capturing device. The image-capturing device 106 captures the temporal and spatial variations in the reflected light as a sequence of image frames, forming either a video stream or a set of reflected light images. The variations in the captured images represent minute motion or vibration patterns resulting from physiological events, such as cardiac pulses, respiratory cycles, and vascular micro-movements. For instance, the vibrations are generated by mechanical contractions of the heart muscle, opening and closing of heart valves, and laminar and turbulent blood flow within the cardiovascular system of the subject 102. The image-capturing device 106 acquires the reflected light at a high sampling frequency of at least 600 Hz to 1.2 kilohertz (kHz), in some cases exceeding 1.6 kHz. In some embodiments, the image-capturing device 106 acquires the reflected light at a sampling frequency of typically at least 20 Hz and often exceeding 200 Hz. The data characterizing reflected light may be a reflected light image or a video of the subject 102 that is recorded by the image-capturing device 106. The data characterizing reflected light may be an MPEG-4 Part 14 (MP4) format file or a numerical array. The data characterizing reflected light comprises low and high frequency components. More particularly, the data characterizing reflected light encodes information about skin vibrations caused by physiological activity associated with the subject 102.
[45] The vibration data captured from a subject's body varies significantly depending on the body region being monitored, as each region reflects distinct physiological activities with unique signal characteristics. For example, in a neck region (jugular and carotid area), vibrations primarily represent jugular venous and carotid artery pulsations, showing moderate amplitude and high-frequency components related to cardiac and respiratory motion, which can be used to extract parameters like blood flow velocity and heart sounds. In a chest region, especially the precordial area, the signals are rich in low- to mid-frequency components and higher in amplitude, capturing mechanical heart sounds (S1, S2, murmurs) and respiratory vibrations, useful for determining heart and respiratory rates. The abdominal region records irregular, low-frequency, burst-like patterns caused by gastrointestinal motility and respiratory-induced abdominal wall movements, while peripheral limb regions capture low to moderate amplitude vibrations from arterial pulsations and muscular micro-movements, aiding in the assessment of peripheral pulse velocity and neuromuscular activity. To handle these variations, the system 100 incorporates region-aware preprocessing and analysis techniques such as adjusting filter settings, feature extraction parameters, and applying region-specific machine learning models, thereby enabling accurate interpretation of physiological parameters from the corresponding vibration data.
[46] Optionally, the image capturing device 106 may include supplementary components. For instance, a lens and/or filter assembly may be employed in conjunction with a sensor module of the image capturing device 106 to optimize light capture, potentially focus the reflected light, and filter out unwanted ambient light using techniques like bandpass filtering. Motion sensors, such as accelerometers and gyroscopes, may be included to detect user movement, allowing a software system to compensate for motion artifacts in the captured data. A communication module, such as Bluetooth or Wi-Fi, can facilitate wireless data transmission between the image capturing device 106 and external devices or networks, such as cloud servers or Electronic Health Record (EHR) systems. A microcontroller or similar processing unit manages the operation of the hardware components, facilitates data transfer, synchronizes data streams, and may perform on-edge computing tasks, including initial data filtering, pre-processing, and data anonymization, governed by firmware/middleware.
[1] In some embodiments, the optical device 104 and the image capturing device 106 are integrated into a single unit. The integrated unit facilitates precise optical alignment, reduces system footprint, and improves portability and ease of use for non-invasive physiological monitoring. The integrated unit may include a shared housing that encapsulates the coherent light source and the image sensor, along with necessary optical elements such as lenses, mirrors, filters, and beam shapers. This arrangement supports synchronized light emission and image acquisition, allowing for real-time collection of reflected light signals from the subject’s body region. The integrated unit may also contain embedded electronics for on-board pre-processing, power regulation, and wireless communication with the server 110. In some embodiments, the integrated unit can be configured as a standalone wearable device worn directly on the user's body, such as an armband, a wrist-worn device (like a watch or bracelet), a finger-worn device (like a ring), or an ear-worn device (like an earplug or integrated into a hearing aid/headphone). Alternatively, the coherent laser source and image sensor modules can be integrated into various existing wearable items, including but not limited to headbands, straps, ankle bracelets, helmets, chokers, glasses, garments (shirts, bras, underpants, gloves, shoes), wearable patches adhering to the skin, or other wearable medical devices. Furthermore, the module can be integrated into non-wearable devices where the laser is positioned to interact with the user's body, directly or through clothing. Examples include integration into other medical or wellness devices, fitness equipment, mobile phones, smart mirrors, bed sensors, toilets or toilet seats, chairs, tables, doors, car components, or even a computer mouse. Multiple such apparatuses or integrated modules may be deployed simultaneously on a single user or across multiple users, synchronized to capture data from various body locations concurrently, providing a comprehensive physiological assessment. The disclosed embodiments are exemplary, and the system can be adapted to numerous other form factors and integration scenarios.
[47] In some embodiments, the optical device 104 and the image capturing device 106 are separate units, thereby providing flexibility in positioning and targeting and enabling the system to adapt to different use cases or subject anatomies. For example, the optical device 104 may be placed at a fixed angle relative to the subject’s body, while the image-capturing device 106 is positioned independently to optimize the viewing angle or field of view, or to minimize specular reflections. The separation also allows for customizable baselines in stereo or multi-angle setups for enhanced depth resolution or motion triangulation. In such modular systems, synchronization between the optical and imaging devices may be achieved via wired or wireless signaling protocols, ensuring temporal coherence between emitted and reflected light frames. Both configurations, the integrated unit and the separate units, may include calibration procedures to account for environmental variables such as ambient lighting, distance variations, or motion artifacts.
[48] The server 110 is communicatively coupled with the optical device 104 and the image capturing device 106 via a network 108. The network 108 may be a wireless network, a wired network, a combination of a wireless network and a wired network, or the Internet. The server 110 includes one or more processors and memory storing computer-readable instructions. When executed, the instructions cause the processor to obtain, from the image-capturing device 106, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof; quantify, using a motion description model, motion in consecutive frames of the reflected light images or the videos; convert data characterizing quantified motion into time-series vibration data; and apply one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, wherein the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
[49] The server 110 comprises one or more first machine learning models 112A, one or more second machine learning models 112B, one or more third machine learning models 112C, and one or more fourth machine learning models 112D. The processor is configured to determine the heart sound, the respiratory sound, and the physiological velocity data using the one or more first machine learning models 112A. The processor is configured to reconstruct, using the one or more second machine learning models 112B, the ECG signal based on the one or more vibration segments. The processor is configured to determine, using the one or more third machine learning models 112C, the respiration rate of the subject based on the one or more vibration segments. The processor is configured to determine, using the one or more fourth machine learning models 112D, the heart rate of the subject based on the one or more vibration segments.
[50] In some embodiments, the processor is configured to determine a health condition of the subject based on determined physiological parameters. The processor may analyze one or more combinations of physiological parameters including, but not limited to, heart sound, respiratory sound, other physiological velocity data, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof. The processor may compare the determined parameters against pre-defined clinical thresholds, baseline measurements, or population-level statistical models to identify abnormal patterns indicative of potential health conditions.
[51] In some embodiments, the processor employs one or more machine learning models, such as decision trees, support vector machines, deep neural networks, or ensemble models, trained on labeled physiological datasets to classify the subject's physiological state into one or more health condition categories. The health condition may include cardiovascular disorders (e.g., heart valve diseases, coronary artery diseases, heart failure, arrhythmias), respiratory conditions (e.g., apnea, dyspnea, bronchial obstruction), circulatory anomalies (e.g., peripheral vascular disease), or neuromuscular abnormalities (e.g., irregular thoracic motion).
[52] In some embodiments, the system may further incorporate longitudinal monitoring data, previous health records, or subject-specific reference parameters to improve the specificity and sensitivity of the health condition detection. Alerts or diagnostic flags may be generated automatically and communicated to clinicians or caretakers through an integrated interface or external device.
[53] FIG. 2 is an exemplary optical device 200 in accordance with the present disclosure. It is to be noted that the optical device 200 is for exemplary purpose only and that various modifications and alternative configurations may be employed without departing from the scope of the present disclosure. In some embodiments, the optical device 200 comprises a coherent light source, such as a laser diode or vertical-cavity surface-emitting laser (VCSEL), configured to emit light of a specified wavelength and coherence suitable for detecting micro-vibrations from a surface of the subject’s body. The optical device 200 may include collimating and focusing optics, beam-shaping elements, and optical isolators to maintain beam quality and reduce back reflections.
[54] In some embodiments, the optical device 200 further comprises one or more beam steering or scanning modules, such as galvanometric mirrors, MEMS-based scanning units, or optical prisms, to dynamically direct the light beam across specific regions of interest on the subject’s body (e.g., neck, chest, or abdomen). Additionally, the optical device 200 may include polarization controllers or filters to enhance signal specificity based on the reflective properties of the skin and subcutaneous tissue.
[55] The optical device 200 may be configured to operate in continuous wave (CW) or pulsed modes depending on a desired temporal resolution, safety thresholds, and power consumption requirements. In some embodiments, the optical device 200 may be integrated with photodetectors or optoelectronic receivers to capture backscattered or reflected light, which is then routed to the image-capturing device or directly processed for motion analysis.
[56] FIG. 3 is a block diagram of the server 110 of FIG. 1 in accordance with the present disclosure. The server 110 includes a database 300, an input receiving module 302, a motion description module 304, a conversion module 306, a filtering module 308, a segmenting and labelling module 310, a physiological parameter determining module 312, and a health condition determining module 314. It is to be understood that the delineation of these modules is for illustrative purposes only. In some embodiments, one or more of the modules may be combined into a single module, or subdivided into multiple sub-modules, depending on implementation requirements, computational architecture, or software design preferences. Additionally, the functionalities described herein may be implemented using a combination of hardware, software, firmware, or any suitable processing logic.
[57] In some embodiments, the database 300 stores raw and pre-processed reflected light data, motion descriptors, time-series vibration signals, segmented data, extracted features, intermediate and final physiological parameters, reconstructed ECG signals, and health condition classification results. The database 300 may also maintain subject-specific data, such as baseline physiological measurements, historical health records, longitudinal monitoring data, and metadata associated with the imaging sessions (e.g., time, date, sensor configuration, or environmental conditions). Longitudinal monitoring data refers to physiological or health-related data that is collected from the same subject over an extended period of time, often at multiple time points. In some embodiments, the database 300 further includes labeled datasets used to train, validate, and test the one or more machine learning models employed within the system 100. The data may include ground-truth physiological parameters obtained via reference-grade medical devices (e.g., ECG machines, spirometers) used during the model training process. The database 300 may be implemented using structured query language (SQL) or NoSQL-based storage systems and can reside on a local server, cloud infrastructure, or a distributed architecture. Access to the database 300 may be governed by encryption protocols and role-based access control to ensure data privacy, compliance with health data regulations (e.g., HIPAA or GDPR), and secure integration with clinical systems, where applicable.
[58] The input receiving module 302 is configured to receive data characterizing reflected light, from the body regions of the subject 102, from various input sources, including the image-capturing device 106, the optical device 104, and any additional external sensors or devices that provide bio-photonic signals. The input receiving module 302 processes the incoming data streams, converting them into a standardized format suitable for subsequent analysis by the other modules within the system 100.
[59] In some embodiments, the input receiving module 302 may also handle data from external sources such as patient records, previous health measurements, or additional sensor inputs (e.g., temperature sensors, motion detectors, etc.). The input receiving module 302 ensures that all data is time-synchronized and tagged with relevant metadata, such as subject identifiers, time stamps, and session details, enabling efficient storage and retrieval for further processing. Furthermore, the input receiving module 302 may incorporate error-checking routines to verify the integrity of the incoming data, discarding corrupted or incomplete signals so that only high-quality data is passed on to the next stages of processing.
[60] The motion description module 304 is configured to quantify motion existing in consecutive frames of the reflected light video or image of the subject 102 using motion description algorithms. The motion description algorithms analyze changes in pixel intensities between frames of the reflected light video to quantify motion. In some embodiments, the module may employ techniques such as optical flow, frame differencing, phase-based motion estimation, or block matching to capture micro-movements on the tissue surface that are reflective of underlying physiological activity. The motion descriptors may represent velocity vectors, displacement fields, or energy distributions, and serve as a basis for generating vibration signals corresponding to physiological processes. In some embodiments, the motion description module 304 may further apply spatial averaging, frame summation, or dimensionality reduction techniques (e.g., PCA) to enhance signal-to-noise ratio and reduce computational complexity.
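By way of a non-limiting illustration, the following Python sketch implements frame differencing, one of the motion description techniques named above. The function name and array layout are assumptions made for illustration and do not represent a prescribed implementation.

```python
import numpy as np

def frame_differencing_motion(frames: np.ndarray) -> np.ndarray:
    """Quantify inter-frame motion in a reflected-light video.

    frames: grayscale pixel intensities of shape (num_frames, height, width).
    Returns one motion-energy value per frame transition (length
    num_frames - 1), capturing micro-movements of the tissue surface.
    """
    frames = frames.astype(np.float64)
    # Absolute intensity change between consecutive frames.
    diffs = np.abs(np.diff(frames, axis=0))
    # Collapse the per-pixel changes to one scalar descriptor per transition.
    return diffs.reshape(diffs.shape[0], -1).sum(axis=1)
```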
[61] The conversion module 306 is configured to receive the data characterizing quantified motion and convert it into time-series vibration data by integrating the total energy distribution in the reflected light across the consecutive frames in the video. The integration of the total energy distribution in the reflected light includes aggregating intensity values of pixels across the consecutive frames (i.e., time series data). The time-series vibration data represent the dynamics of the reflected light over time. The time-series data include features of the reflected light relevant to physiological events.
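A minimal sketch of this conversion, assuming the frames are available as a NumPy array and the camera frame rate is known, might aggregate the pixel intensities of each frame into one energy value sampled at the frame rate:

```python
import numpy as np

def frames_to_vibration_series(frames: np.ndarray, fps: float):
    """Aggregate per-frame pixel intensities into a 1-D vibration series.

    frames: array of shape (num_frames, height, width).
    fps:    camera frame rate in Hz; the series is sampled at this rate.
    Returns (timestamps_s, vibration) as two 1-D arrays.
    """
    vibration = frames.astype(np.float64).reshape(len(frames), -1).sum(axis=1)
    timestamps_s = np.arange(len(frames)) / fps
    return timestamps_s, vibration
```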
[62] The filtering module 308 is configured to apply one or more bandpass filters on the time-series vibration data to isolate frequency components of interest related to the one or more physiological parameters, and normalize an amplitude of the time-series data by scaling the time-series data to a range, e.g., [-1, 1], or standardizing the time-series vibration data to have zero mean and unit variance. The bandpass filter selectively passes the frequency components pertinent to the one or more physiological parameters by allowing the frequency components within a range, e.g., 20 hertz (Hz) to 750 Hz, to pass through. The bandpass filter eliminates frequency components outside the range. In some embodiments, multiple bandpass filters may be applied in parallel to extract distinct bands corresponding to different physiological sources, such as low-frequency bands for respiratory activity (e.g., 0.1 Hz to 0.5 Hz), mid-frequency bands for heart sounds (e.g., 20 Hz to 150 Hz), and high-frequency components for vascular or muscular microvibrations (e.g., 150 Hz to 750 Hz). The filtering module 308 may also implement adaptive filtering strategies where filter parameters are dynamically adjusted based on signal characteristics, subject-specific baselines, or known noise patterns. Additionally, artifact suppression techniques, such as notch filtering (e.g., at 50/60 Hz) or empirical mode decomposition (EMD), may be employed to further enhance signal quality. The output of the filtering module 308 is a clean, frequency-isolated, and amplitude-normalized time-series signal, which is then forwarded to the segmenting and labelling module 310 for further processing.
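An exemplary, non-limiting realization of such a filter bank using SciPy is sketched below; the band edges mirror the ranges stated above, while the function name, filter order, and example sampling rate are illustrative assumptions.

```python
from scipy.signal import butter, sosfiltfilt

def bandpass_and_normalize(x, fs, low_hz, high_hz, order=4):
    """Isolate one physiological band and standardize the result.

    x: 1-D time-series vibration data sampled at fs Hz.
    Returns the band-limited signal with zero mean and unit variance.
    """
    sos = butter(order, [low_hz, high_hz], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, x)  # zero-phase bandpass
    return (filtered - filtered.mean()) / filtered.std()

# Parallel bands as described above (fs assumed to be 2000 Hz here):
# resp  = bandpass_and_normalize(x, 2000.0, 0.1, 0.5)     # respiration
# heart = bandpass_and_normalize(x, 2000.0, 20.0, 150.0)  # heart sounds
# micro = bandpass_and_normalize(x, 2000.0, 150.0, 750.0) # microvibrations
```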
[63] The segmenting and labelling module 310 is configured to divide the processed time-series vibration data into equal or variable-length segments or epochs. The segmenting and labelling module 310 selects segment durations, e.g., 5-10 seconds, based on the analysis requirements, desired temporal resolution, or physiological event cycles (such as cardiac or respiratory cycles). In some embodiments, the segmentation process is guided by heuristic rules, statistical change point detection, signal energy thresholds, or machine learning-based dynamic windowing techniques that account for the variability in signal patterns and subject-specific rhythms. For instance, higher energy regions may indicate cardiac events, while lower frequency periodic patterns may be aligned with respiratory cycles. Additionally, the segmenting and labelling module 310 may utilize overlap-based windowing to preserve continuity between segments, improving robustness in downstream analysis. The segments are further labelled or annotated with frequency band classifications (e.g., low-frequency or high-frequency components), which can aid in feature extraction and classification stages. The segmented and labelled time-series data is then passed to the physiological parameter determining module 312 for extracting relevant features and determining physiological parameters using trained machine learning models.
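For illustration only, an overlap-based windowing consistent with the 5-10 second epochs described above could be sketched as follows; the window length and overlap fraction shown are assumed parameters.

```python
import numpy as np

def segment_signal(x, fs, win_s=5.0, overlap=0.5):
    """Split a filtered vibration signal into overlapping fixed-length epochs.

    win_s:   window length in seconds (the description uses 5-10 s epochs).
    overlap: fraction of each window shared with the next, preserving
             continuity between segments.
    Returns an array of shape (num_segments, samples_per_window).
    """
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    starts = range(0, len(x) - win + 1, step)
    return np.stack([x[s:s + win] for s in starts])
```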
[64] The physiological parameter determining module 312 is configured to determine, using one or more statistical models and/or machine learning models, at least one physiological parameter by analyzing the one or more vibration segments labelled with frequency components. The physiological parameter includes heart sound segments, respiratory sound segments, physiological velocity data segments, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof. In some embodiments, different machine learning models may be specialized for different physiological parameters. For instance, the first machine learning models 112A may classify segments as heart or respiratory sounds, the second machine learning models 112B may reconstruct ECG-like signals based on temporal and spectral features extracted from vibration data, while the third machine learning models 112C may estimate the respiration rate and the fourth machine learning models 112D may estimate the heart rate. The classification into first, second, third, and fourth models is made for differentiation and explanation purposes only; in practice, a single machine learning model may be capable of estimating multiple physiological parameters, or the system may utilize a combination of multiple specialized models operating in parallel or in sequence. In some embodiments, the one or more machine learning models may also incorporate patient-specific information, such as previous medical records, clinical history, or demographic data (e.g., age, sex, weight, pre-existing conditions), to enable personalized analysis and improve the accuracy and reliability of physiological parameter estimation. The machine learning models are typically deep learning models such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs). The physiological parameter determining module 312 and its machine learning components are explained in further detail with reference to FIG. 4, which illustrates the architecture and interaction of various submodules and models used for determining physiological parameters from the vibration data.
[65] The health condition determining module 314 is configured to determine the health condition of the subject based on determined physiological parameters. The health condition determining module 314 may analyze one or more combinations of physiological parameters including, but not limited to, heart sound, respiratory sound, physiological velocity data, electrocardiogram (ECG) signal, heart rate, respiration rate, or combinations thereof. The health condition determining module 314 may compare the determined parameters against pre-defined clinical thresholds, baseline measurements, or population-level statistical models to identify abnormal patterns indicative of potential health conditions. In some embodiments, the health condition determining module 314 employs one or more machine learning models, such as decision trees, support vector machines, deep neural networks, or ensemble models, trained on labeled physiological datasets to classify the subject's physiological state into one or more health condition categories. The health condition may include cardiovascular disorders (e.g., heart valve diseases, coronary artery diseases, heart failure, arrhythmias), respiratory conditions (e.g., apnea, dyspnea, bronchial obstruction), circulatory anomalies (e.g., peripheral vascular disease), or neuromuscular abnormalities (e.g., irregular thoracic motion).
[66] FIG. 4 is a block diagram of the physiological parameter determining module 312 of FIG. 3 in accordance with the present disclosure. The physiological parameter determining module 312 includes a feature extraction module 402, a feature selection module 404, a heart sound determination module 406, a respiratory sound determination module 408, a physiological velocity data determination module 410, an electrocardiogram (ECG) signal reconstruction module 412, a respiration rate determination module 414, and a heart rate determination module 416. In some embodiments, one or more of these modules may be combined into a single module, or alternatively, further subdivided into submodules, depending on system architecture, processing requirements, or specific deployment scenarios.
[67] The feature extraction module 402 is configured to extract statistical features, complex features or combined features, non-linear entropy features, Independent Component Analysis (ICA) features, and wavelet-based features from the one or more vibration segments labelled with low and high frequency components. The statistical features include information on the distribution and shape of the vibration segments. The statistical features include mean, median, variance, standard deviation, skewness, or kurtosis. The combined features include information on higher-order characteristics of the one or more vibration segments labelled with low and high frequency components. The combined features include Peak-Peak mean, mean square value, Hjorth parameter activity, Hjorth parameter mobility, Hjorth parameter complexity, maximum power spectral frequency, maximum Power Spectral Density (PSD), or power sum. The non-linear entropy features provide information about the irregularity, complexity, and predictability of heart sound signals or ECG signals in the one or more vibration segments labelled with low and high frequency components. The non-linear entropy features are Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, C0 complexity, correlation dimension, Lyapunov exponent, permutation entropy, or spectral entropy. The ICA features and the wavelet-based features include additional information about the heart sound data.
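A non-limiting sketch of a few of these features (the statistical moments, the Hjorth parameters, and the Shannon entropy of the amplitude distribution) is given below; the histogram bin count and function layout are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(segment: np.ndarray) -> dict:
    """Compute illustrative statistical, Hjorth, and entropy features
    for one labelled vibration segment."""
    d1 = np.diff(segment)                       # first derivative
    d2 = np.diff(d1)                            # second derivative
    activity = np.var(segment)                  # Hjorth activity
    mobility = np.sqrt(np.var(d1) / activity)   # Hjorth mobility
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    # Shannon entropy of the amplitude histogram.
    hist, _ = np.histogram(segment, bins=64)
    p = hist[hist > 0] / hist.sum()
    shannon = -np.sum(p * np.log2(p))
    return {
        "mean": segment.mean(), "median": np.median(segment),
        "variance": activity, "std": segment.std(),
        "skewness": skew(segment), "kurtosis": kurtosis(segment),
        "peak_to_peak": segment.max() - segment.min(),
        "hjorth_mobility": mobility, "hjorth_complexity": complexity,
        "shannon_entropy": shannon,
    }
```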
[68] The feature selection module 404 is configured to select relevant features from the statistical features, the complex features or combined features, the non-linear entropy features, the ICA features, and the wavelet-based features using a feature selection technique. The feature selection technique includes low variance filters, high correlation filters, random forests, and forward feature selection. The feature selection module 404 selects the relevant features from the statistical features, the complex features or combined features, the non-linear entropy features, the ICA features, and the wavelet-based features based on one or more covariates of the subject 102. The one or more covariates may include gender, age, or Body Mass Index (BMI) of each subject.
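As an illustrative sketch of the low variance and high correlation filters named above, with assumed threshold values, one might write:

```python
import numpy as np
import pandas as pd

def select_features(df: pd.DataFrame, var_thresh=1e-3, corr_thresh=0.95):
    """Drop near-constant features, then drop one feature of every pair
    whose absolute pairwise correlation exceeds corr_thresh.

    df: one row per vibration segment, one column per extracted feature.
    """
    kept = df.loc[:, df.var() > var_thresh]   # low variance filter
    corr = kept.corr().abs()                  # high correlation filter
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    redundant = [c for c in upper.columns if (upper[c] > corr_thresh).any()]
    return kept.drop(columns=redundant)
```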
[69] The feature selection technique analyses the one or more vibration segments labelled with low and high frequency components to extract the features of the cardiovascular conditions. If the features in the one or more vibration segments labelled with low and high frequency components are related to abnormal heart rhythms or murmurs, the feature selection technique assigns higher weights, since abnormal heart rhythms or murmurs are directly related to cardiovascular conditions. If features in the one or more vibration segments labelled with low and high frequency components are related to background noise or irrelevant physiological parameters, the feature selection technique assigns lower weights or scores.
[70] The heart sound determination module 406 is configured to classify the low- and high-frequency components of the vibration segments into cardiac-related acoustic events, including heart sounds (S1 and S2) and murmurs (S3 and S4). The S1 and S2 heart sounds are primarily generated by the closure of the atrioventricular (mitral and tricuspid) and semilunar (aortic and pulmonary) valves, respectively, during the cardiac cycle. The S3 and S4 sounds, often categorized as murmurs or additional heart sounds, may be associated with ventricular filling dynamics and can be indicative of underlying cardiovascular abnormalities, although they are not always pathological. The heart sound determination module 406 processes the time-series vibration data by analyzing frequency patterns and temporal features that correspond to turbulent blood flow, often caused by valvular insufficiency, stenosis, or the interaction of blood with arterial plaques or cholesterol deposits. These acoustic signatures help in identifying potential cardiovascular issues with increased diagnostic granularity. The heart sound determination module 406 determines the heart sound using the one or more first machine learning models 112A. The one or more first machine learning models 112A are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of a heart sound class, including S1, S2, murmur, or abnormal heart sound.
[71] The respiratory sound determination module 408 is configured to classify the low- and high-frequency components of the vibration segments into respiratory sounds. The respiratory sound determination module 408 utilizes the one or more first machine learning models 112A trained on annotated datasets to distinguish between various types of respiratory sounds such as normal breath sounds, wheezes, crackles (rales), stridor, and rhonchi, based on frequency content, temporal patterns, and amplitude characteristics. The low-frequency components typically correspond to normal breathing patterns and broad airflow movements, while high-frequency components are often indicative of pathological respiratory conditions, such as airway obstructions, fluid accumulation, or restrictive lung diseases. The respiratory sound determination module 408 may employ feature extraction techniques (e.g., spectral entropy, wavelet features, and time-domain descriptors) to capture key characteristics of respiratory acoustics and apply classification models (e.g., convolutional neural networks, support vector machines, or ensemble models) to label each segment accordingly. In some embodiments, the respiratory sound determination module 408 may also analyze temporal consistency, duration, and bilateral sound symmetry to enhance diagnostic precision and differentiate between upper and lower respiratory tract anomalies.
[72] The physiological velocity data determination module 410 is configured to analyze the vibration segments and classify them into physiological velocity data, which includes, but is not limited to, blood flow velocity, respiratory airflow velocity, or thoracic motion velocity. Using the first machine learning models 112A, the physiological velocity data determination module 410 processes the low- and high-frequency components of the vibration data to estimate hemodynamic parameters (e.g., cardiac output, peripheral circulation speed), respiratory airflow rates, or thoracic wall motion associated with chest expansion and contraction during breathing. For instance, blood flow velocity can be inferred from changes in vibrational frequency corresponding to arterial pulse waves or blood flow turbulence, while respiratory airflow velocity is derived from air movement dynamics within the tracheobronchial tree and alveolar regions. The physiological velocity data determination module 410 integrates longitudinal data and subject-specific information (e.g., age, gender, health conditions) to provide personalized velocity metrics, which may help detect abnormalities in circulatory health or lung function. The processed physiological velocity data is then used to inform further diagnostic or therapeutic decisions. The first machine learning models 112A may be trained with a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of a physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
[73] The electrocardiogram (ECG) signal reconstruction module 412 is configured to reconstruct, using one or more second machine learning models 112B, the ECG signal based on the one or more vibration segments labelled with frequency components. The one or more second machine learning models 112B are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region, and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals, wherein the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform. The second machine learning models 112B are trained to capture both the temporal and morphological characteristics of the ECG waveform. The second machine learning models 112B learn to reconstruct P-waves, QRS complexes, and T-waves, as well as the overall rhythm and heart rate information from the vibration-based features, which reflect the cardiac electrical activity as captured in the reflected light data. The reconstructed ECG signal replicates temporal dynamics (e.g., the intervals between successive heartbeats) and morphological features (e.g., the amplitude and shape of the P, QRS, and T waves) of a physiological ECG waveform. This process allows for the non-invasive estimation of cardiac activity in subjects where direct electrode-based ECG recording is not feasible or preferred. Additionally, the system can adapt to individual subject characteristics, such as anatomical variations, cardiovascular health conditions, and motion artifacts in the data, ensuring accurate and personalized ECG reconstruction. The reconstructed ECG signals can then be used to perform further diagnostic assessments such as heart rate variability analysis, arrhythmia detection, or assessment of cardiac health in conjunction with other physiological parameters.
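The disclosure does not prescribe a particular network topology; purely as a hypothetical sketch, a compact one-dimensional convolutional encoder-decoder in PyTorch could learn the vibration-to-ECG mapping from paired segments. The layer sizes, kernel widths, and class name below are assumptions.

```python
import torch
import torch.nn as nn

class VibrationToECG(nn.Module):
    """Illustrative 1-D encoder-decoder mapping a vibration segment to a
    time-aligned ECG segment of the same length."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=15, padding=7), nn.ReLU(),
        )
        self.decoder = nn.Conv1d(channels, 1, kernel_size=15, padding=7)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples) vibration; output: (batch, 1, samples) ECG.
        return self.decoder(self.encoder(x))

# Training on time-synchronized (vibration, ECG) pairs, as described above:
# model = VibrationToECG()
# loss = nn.MSELoss()(model(vibration_batch), ecg_batch)
```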
[74] To validate the reconstructed ECG signal, the ECG signal obtained via the electrode-based method is processed and compared with the reconstructed ECG signal. First, filtering is performed to remove noise and artifacts from the reconstructed ECG signal. This may include low-pass filtering to eliminate high-frequency noise, high-pass filtering to remove baseline wander, bandpass filtering to isolate the relevant frequency range, and notch filtering to remove power line interference. Following this, the reconstructed ECG signal undergoes normalization, where it is standardized to have zero mean and unit variance, ensuring consistent signal amplitude across recordings. Noise correction is then applied to further refine the ECG signal. To enhance the quality of the ECG, automated noise correction and segment removal are performed. This involves detecting and removing corrupted cycles, ensuring only high-quality, representative cycles remain for further analysis and reconstruction. The cycle detection and windowing process begins by identifying R-peaks using algorithms such as wavelet transform or convolutional neural networks (CNNs). After detecting the peaks, the ECG signal is divided into windows centered around each R-peak, with the window size adjusted to include one complete cycle. The template cycle creation process involves selecting middle cycles from the recording, assuming that these cycles are more stable and representative of the overall signal. A mean cycle is then computed from these middle cycles to create an ideal cycle template for comparison. Matched filtering is then employed to identify the best cycles by calculating the correlation coefficient between each cycle and the template cycle. Cycles with a correlation coefficient below a threshold (e.g., r < 0.90) are considered too noisy and discarded, ensuring only consistent and high-quality cycles are retained for further analysis. The indices of these discarded cycles are also tracked to ensure they are excluded from other related data, such as phonocardiogram (PCG) data, guaranteeing the integrity of the final dataset used for ECG reconstruction validation.
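The matched-filtering quality gate described above can be sketched as follows, assuming R-peak-centered cycle windows are already available as rows of an array. The selection of middle cycles and the r >= 0.90 threshold follow the text; the function name is an assumption.

```python
import numpy as np

def keep_clean_cycles(cycles: np.ndarray, r_threshold: float = 0.90):
    """Template-based quality gate for R-peak-centered ECG cycles.

    cycles: array of shape (num_cycles, window_samples).
    Builds a template from the middle cycles of the recording, keeps only
    cycles correlating with it at r >= r_threshold, and returns
    (clean_cycles, kept_indices) so that discarded indices can also be
    removed from synchronized data such as PCG.
    """
    n = len(cycles)
    middle = cycles[n // 4 : 3 * n // 4]   # assumed-stable middle cycles
    template = middle.mean(axis=0)          # mean "ideal" cycle
    r = np.array([np.corrcoef(c, template)[0, 1] for c in cycles])
    kept = np.where(r >= r_threshold)[0]
    return cycles[kept], kept
```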
[75] In some embodiments, the ECG signal reconstruction module 412 measures an accuracy of the P-wave, the QRS complex, and the T-wave durations in the reconstructed ECG signal by comparing the P-wave, the QRS complex, and the T-wave durations in the reconstructed ECG with historical ECG data. The ECG signal reconstruction module 412 generates a similarity score for segments of the reconstructed ECG signal, such as the P-Q interval, QRS complex, S-T segment, and T wave, using cross-correlation or waveform similarity measures (e.g., cosine similarity). A high similarity score indicates that the reconstructed ECG signal replicates the shape and timing of critical segments in the historic ECG data. The ECG signal reconstruction module 412 determines how well the reconstructed ECG signal captures and preserves the irregular heart rhythms or arrhythmias exhibited in the historic ECG data.
[76] The ECG signal reconstruction module 412 compares the heights or amplitudes of peaks (P, Q, R, S, T) in the reconstructed ECG signal with the historic ECG data. The peaks represent various phases of the heart's electrical activity. By calculating the mean absolute error or percentage error in the amplitudes of the peaks, the ECG signal reconstruction module 412 assesses how well the reconstructed ECG signal matches the historic ECG data in terms of peak amplitudes. The ECG signal reconstruction module 412 assesses how accurately peaks, such as the R-R interval (the time between consecutive R waves), are timed in the reconstructed ECG signal compared to the historic ECG data. The ECG signal reconstruction module 412 evaluates the sensitivity and specificity of peak detection (during identification of the R-peaks) in the reconstructed ECG signal relative to the historic ECG data. The sensitivity ensures all significant peaks in the reconstructed ECG signal are detected and no important peaks are missed. The specificity ensures that no false peaks are introduced during the detection process and that only genuine peaks are identified.
[77] The ECG signal reconstruction module 412 measures the variability in all RR intervals (consecutive heartbeats) in the ECG data using the Standard Deviation of NN intervals (SDNN) metric. Comparing SDNN values between the reconstructed ECG signal and the historic ECG data indicates the accuracy of the overall heart rate variability. The ECG signal reconstruction module 412 also measures the Root Mean Square of Successive Differences (RMSSD), which reflects heart rate variability (HRV), i.e., the variability in the interval between adjacent heartbeats.
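For illustration, the two time-domain metrics reduce to a few lines given the R-peak times; expressing the results in milliseconds is an assumption consistent with common HRV practice.

```python
import numpy as np

def sdnn_rmssd(r_peak_times_s: np.ndarray):
    """Time-domain HRV metrics from R-peak times given in seconds.

    SDNN:  standard deviation of all NN (RR) intervals.
    RMSSD: root mean square of successive RR-interval differences.
    Both are returned in milliseconds.
    """
    rr_ms = np.diff(r_peak_times_s) * 1000.0
    sdnn = np.std(rr_ms)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    return sdnn, rmssd
```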
[78] The ECG signal reconstruction module 412 measures the power of the frequency components in the range of 0.04 to 0.15 Hz using a Low Frequency (LF) metric. This power reflects both sympathetic and parasympathetic influences on heart rate variability. The ECG signal reconstruction module 412 measures the power of the frequency components in the range of 0.15 to 0.4 Hz using a High Frequency (HF) metric. This power reflects parasympathetic activity, particularly respiratory sinus arrhythmia. The ECG signal reconstruction module 412 calculates the ratio between the LF and the HF. The LF/HF ratio reflects the balance between sympathetic and parasympathetic nervous activities. A higher LF/HF ratio indicates greater sympathetic dominance, while a lower LF/HF ratio indicates increased parasympathetic activity relative to sympathetic activity. The ECG signal reconstruction module 412 compares the Power Spectral Density (PSD) of the reconstructed ECG signal and the historic ECG data, which indicates how well the reconstruction captures the dynamic range and frequency characteristics of the ECG signal. The ECG signal reconstruction module 412 identifies and compares frequency components (e.g., the dominant frequencies within the LF and HF bands), which indicates how closely the reconstructed ECG signal matches the historic ECG data in terms of spectral content. The ECG signal reconstruction module 412 compares total power between the reconstructed ECG signal and the historic ECG data, which is utilized to determine the overall energy captured by the reconstruction process. The total power represents the sum of powers across all frequency components in the reconstructed ECG signal. The ECG signal reconstruction module 412 quantifies the complexity or regularity of the ECG signal using spectral entropy, and compares the spectral entropy of the reconstructed signal with that of the historic data, indicating how closely the reconstruction matches the complexity of the heartbeat.
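A hedged sketch of the frequency-domain computation is given below, assuming the RR-interval series has already been interpolated onto a uniform grid (the 4 Hz resampling rate is an assumption, not taken from the text); it integrates a Welch power spectral density over the LF and HF bands defined above.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_metrics(rr_tachogram: np.ndarray, fs: float = 4.0):
    """LF power (0.04-0.15 Hz), HF power (0.15-0.4 Hz), and LF/HF ratio
    from an evenly resampled RR-interval series."""
    f, psd = welch(rr_tachogram, fs=fs, nperseg=min(256, len(rr_tachogram)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f <= 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])   # integrate PSD over LF band
    hf = np.trapz(psd[hf_band], f[hf_band])   # integrate PSD over HF band
    return lf, hf, lf / hf
```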
[79] In some embodiments, the ECG signal reconstruction module 412 provides information about an assessment of various waves in the ECG signal, including P wave (atrial depolarization), Q wave, R wave, S wave (ventricular depolarization), T wave (ventricular repolarization), and U wave. The ECG signal reconstruction module 412 provides information about measurement of different intervals in the ECG signal, such as PR interval (atrioventricular conduction time), QRS interval (ventricular depolarization time), QT interval (total ventricular activity), and RR interval (heart rate). The ECG signal reconstruction module 412 provides information about the segments between the waves, including PR segment (atrial repolarization), ST segment (early ventricular repolarization), and TP segment (ventricular repolarization complete to next depolarization). The ECG signal reconstruction module 412 provides information about the QRS complex (ventricular depolarization). The ECG signal reconstruction module 412 provides information about the characterization of arrhythmias, including abnormalities in the regularity or pattern of the heart rhythm. The pattern of the heart rhythm includes conditions such as atrial fibrillation, bradycardia, tachycardia, or premature ventricular contractions (PVCs). The ECG signal reconstruction module 412 provides information about the calculation of the heart rate based on various intervals, such as the S1-S1 interval (time between sequential heartbeats).
[80] The respiration rate determination module 414 is configured to determine the respiration rate of the subject based on the one or more vibration segments labelled with frequency components, using the one or more third machine learning models 112C. The one or more third machine learning models 112C are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity, and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments. The third machine learning models 112C analyze selected features, such as the time interval between successive peaks or the dominant frequency components within the respiratory band (e.g., 0.1-0.5 Hz), which are indicative of the subject’s respiration rate. In some embodiments, adaptive learning techniques may be employed to account for variations due to posture, activity, or individual physiological differences. The respiration rate determination module 414 may also utilize ensemble learning or hybrid architectures that combine convolutional neural networks (CNNs) for spatial feature extraction and recurrent neural networks (RNNs), such as long short-term memory (LSTM) networks, to model the temporal dynamics of respiratory patterns. Furthermore, the module may incorporate subject-specific parameters, demographic data, and longitudinal respiration trends to enhance personalization and prediction robustness, thereby ensuring accurate and continuous estimation of respiration rate in real-time or near real-time conditions.
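As a simplified, non-limiting illustration of the dominant-frequency analysis within the respiratory band, the respiration rate could be estimated from a vibration segment as follows; longer segments improve the frequency resolution within the 0.1-0.5 Hz band.

```python
import numpy as np

def respiration_rate_bpm(segment: np.ndarray, fs: float, band=(0.1, 0.5)):
    """Estimate respiration rate as the dominant spectral peak within the
    respiratory band (0.1-0.5 Hz, i.e. 6-30 breaths per minute)."""
    spectrum = np.abs(np.fft.rfft(segment - segment.mean()))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    dominant_hz = freqs[mask][np.argmax(spectrum[mask])]
    return dominant_hz * 60.0  # breaths per minute
```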
[81] The heart rate determination module 416 is configured to determine the heart rate of the subject based on the one or more vibration segments labelled with frequency components, using the one or more fourth machine learning models 112D. The one or more fourth machine learning models 112D are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body, and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate. The vibration segments may be preprocessed to enhance heartbeat-related features by applying bandpass filters targeting the heart rate frequency range (typically 0.8-20 Hz), followed by normalization to reduce amplitude variability across subjects or sessions.
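Purely as an illustrative sketch of this periodicity detection, applied to a segment already bandpassed to the cardiac range and normalized as described above (the refractory period and prominence threshold are assumed values):

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(segment: np.ndarray, fs: float):
    """Estimate heart rate by detecting beat peaks in a cardiac-band
    vibration segment and inverting the mean inter-peak interval."""
    # Enforce a 0.3 s refractory period between peaks (caps rate near 200 bpm).
    peaks, _ = find_peaks(segment, distance=int(0.3 * fs),
                          prominence=segment.std())
    if len(peaks) < 2:
        return None  # not enough beats detected in this segment
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s
```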
[82] With reference to FIGS. 1-4, FIGS. 5A and 5B illustrate a method for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models in accordance with the present disclosure. For purposes of clarity and to avoid redundancy, the descriptions of the hardware elements such as the optical device, image-capturing device, and processing units previously outlined in FIGS. 1-4 are not repeated herein. Rather, FIGS. 5A and 5B focus on the procedural flow of operations carried out by the system components for signal acquisition, processing, feature extraction, machine learning-based analysis, and physiological parameter determination. The method can be implemented in real-time or offline modes depending on the system configuration and application requirements. It is further noted that, to avoid repetition, detailed technical descriptions of each step are omitted here as they have been comprehensively explained in association with the corresponding modules in earlier sections. At step 502, the method includes emitting, using the optical device 104, coherent laser light on one or more body regions of a subject. At step 504, the method includes capturing, using the image-capturing device 106, reflected light, from the one or more body regions of the subject, as an image or a video, the reflected light being indicative of vibrations generated by physiological events of the subject. At step 506, the method includes obtaining, from the image-capturing device 106, data characterizing reflected light from the one or more body regions of the subject, the data characterizing reflected light comprising one or more reflected light images, one or more reflected light videos, or combinations thereof. At step 508, the method includes quantifying, using the motion description model, motion in consecutive frames of the reflected light images or the videos. At step 510, the method includes converting data characterizing quantified motion into time-series vibration data. At step 512, the method includes applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters and optionally further segmenting the filtered time-series vibration data into one or more vibration segments, wherein the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
[83] FIG. 6 is a schematic diagram of a computer architecture 600 for executing the embodiments in accordance with the present disclosure. This schematic drawing illustrates a hardware or computer configuration of the server 110, a computer system, or a computing device in accordance with the embodiments herein. For instance, the server 110 comprises the computer architecture 600 for executing one or more functions in determining one or more physiological parameters. The computer architecture 600 includes at least one processing device CPU 10 that may be interconnected via system bus 14 to various devices such as a random-access memory (RAM) 12, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 38 and program storage devices 40 that are readable by the system. The system can read the inventive instructions on the program storage devices 40 and follow these instructions to execute the methodology of the embodiments herein. The system further includes a user interface adapter 22 that connects a keyboard 28, mouse 30, speaker 32, microphone 34, and/or other user interface devices such as a touch screen device (not shown) to the bus 14 to gather user input. Additionally, a communication adapter 20 connects the bus 14 to a data processing network 42, and a display adapter 24 connects the bus 14 to a display device 26, which provides a graphical user interface (GUI) 36 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[84] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the appended claims.

CLAIMS

What is claimed is:
1. A system for non-invasively determining physiological parameters from bio-photonic signals of a subject using machine learning models, the system comprising: an optical device configured to emit coherent laser light on one or more body regions of a subject, and an image-capturing device configured to capture reflected light, from the one or more body regions of the subject, as an image or a video, wherein the reflected light is indicative of vibrations generated by physiological events of the subject, wherein the one or more body regions comprise head, neck, chest, back, stomach, hand, leg area, or combinations thereof; and a server communicatively connected to the image-capturing device, wherein the server comprises a memory storing a database and a set of modules; and a processor configured to execute the set of modules to: obtain, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, wherein the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantify, using a motion description model, motion in consecutive frames of the reflected light images or the videos; convert data characterizing quantified motion into time-series vibration data; and apply one or more filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, wherein the filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
2. The system of claim 1, wherein the processor is further configured to determine heart sound segments, respiratory sound segments, and other physiological velocity data segments using one or more first machine learning models, wherein the one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of:
(a) a heart sound class, including S1, S2, murmur, or abnormal heart sound;
(b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and
(c) a physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
3. The system of claim 1, wherein the processor is further configured to reconstruct, using one or more second machine learning models, an ECG signal based on the filtered time-series vibration data, wherein the one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region, and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals, wherein the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
4. The system of claim 1, wherein the processor is configured to determine, using the one or more third machine learning models, respiration rate of the subject based on the filtered time-series vibration data, wherein the one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity, and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
5. The system of claim 1, wherein the processor is configured to determine, using the one or more fourth machine learning models, heart rate of the subject based on the filtered time-series vibration data, wherein the one or more fourth machine learning models are trained using labeled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body, and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
6. The system of claim 1, wherein the optical device emits coherent laser light at wavelengths ranging from 400 nanometers (nm) to 2500 nm, with a power output between 0.1 milliwatts (mW) and 5 mW.
7. The system of claim 1, wherein the vibration data is acquired at a sampling frequency in a range of approximately 50 hertz (Hz) to at least 400 Hz, and up to 10000 Hz.
8. The system of claim 1, wherein the motion description model employs at least one of an optical flow method, a block matching algorithm, a phase-based method, a gradient-based method or a feature-based method.
9. The system of claim 1, wherein the processor is configured to standardize the time series vibration data to have zero mean and unit variance.
10. The system of claim 1, wherein the processor is configured to segment the filtered time-series vibration data, wherein the one or more vibration segments are of equal or variable length, and wherein the variable-length segments are selected to have a duration within a range of 2 to 10 seconds.
11. The system of claim 1, wherein the processor is configured to label the filtered time-series vibration data into low and high-frequency components using statistical heuristics-based methods, semi-Bayesian methods, or deep learning-based segmentation models.
12. The system of claim 1, wherein the processor determines the at least one physiological parameter by extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis; non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on extracted features; and determining, using the one or more machine learning models, at least one physiological parameter based on the selected features.
13. A method for non-invasively determining physiological parameters from biophotonic signals of a subject using machine learning models, the method comprising: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light, from the one or more body regions of the subject, as an image or a video, wherein the reflected light is indicative of vibrations generated by physiological events of the subject, wherein the one or more body regions comprise head, neck, chest, back, stomach, hand, leg area, or combinations thereof; obtaining, from the image-capturing device, data characterizing reflected light from the one or more body regions of the subject, wherein the data characterizing reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or the videos; converting data characterizing quantified motion into time-series vibration data; and applying one or more bandpass filters on the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, wherein the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
14. The method of claim 13, wherein the method determines the heart sound segments, the respiratory sound segments, the physiological velocity data segments using one or more first machine learning models, wherein the one or more first machine learning models are trained using a training dataset comprising labelled time-series segments, each segment annotated with corresponding ground truth information indicative of at least one of:
(a) a heart sound class, including S1, S2, murmur, or abnormal heart sound;
(b) a respiratory sound class, including wheeze, crackle, or normal breath sound; and
(c) a physiological velocity value, including blood flow velocity, respiratory airflow velocity, thoracic or diaphragmatic motion velocity, gastrointestinal motility related velocity (derived from bowel sounds), peripheral artery bruits velocity, and carotid artery bruits velocity.
15. The method of claim 13, wherein the method reconstructs, using one or more second machine learning models, an ECG signal based on the filtered time-series vibration data, wherein the one or more second machine learning models are trained using a plurality of paired datasets comprising: time-synchronized vibration-based time-series segments derived from reflected light data of a subject’s body region; and corresponding ground truth ECG signals recorded using electrode-based systems, such that the second machine learning models learn a mapping from the vibration-based input features to the electrical cardiac activity patterns represented in ECG signals, wherein the reconstructed ECG signal replicates temporal and morphological characteristics of a physiological ECG waveform.
16. The method of claim 13, wherein the method determines, using the one or more third machine learning models, respiration rate of the subject based on the filtered time-series vibration data, wherein the one or more third machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to thoracic or upper body motion associated with respiratory activity; and reference respiration rate data obtained from clinical-grade respiratory monitoring devices, such that the third machine learning models learn to identify periodic respiratory patterns and compute the respiration rate by analyzing cyclic features, frequency components, or temporal intervals within the one or more vibration segments.
17. The method of claim 13, wherein the method determines, using one or more fourth machine learning models, a heart rate of the subject based on the one or more vibration segments labelled with frequency components, wherein the one or more fourth machine learning models are trained using labelled datasets comprising: vibration-based time-series signals corresponding to cardiac-induced motion or vibrations from the subject’s body; and reference heart rate data obtained from electrocardiogram (ECG) or pulse oximeter devices, such that the fourth machine learning models learn to detect the periodicity of heartbeats within the vibration segments by analyzing temporal intervals, frequency patterns, and dynamic features within the one or more vibration segments, enabling accurate determination of the heart rate.
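Similarly for claim 17, this sketch performs the temporal-interval analysis the fourth machine learning models are trained to carry out: beat peaks are detected in the cardiac-band trace and the mean inter-beat interval is converted to beats per minute. The prominence threshold and refractory distance are illustrative choices, not values from the application.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(cardiac_band, fs, max_hr=200):
    """Detect beat peaks, then convert the mean inter-beat interval to
    beats per minute; `distance` enforces a refractory period."""
    min_gap = int(fs * 60.0 / max_hr)
    peaks, _ = find_peaks(cardiac_band, distance=min_gap,
                          prominence=cardiac_band.std())
    if len(peaks) < 2:
        return float("nan")
    return 60.0 / (np.diff(peaks) / fs).mean()

fs = 500.0
t = np.arange(0, 10, 1.0 / fs)
demo = np.sin(2 * np.pi * 1.2 * t) ** 3   # synthetic 72 bpm pulse train
print(heart_rate_bpm(demo, fs))           # ~72
```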
18. The method of claim 13, wherein the method determines the at least one physiological parameter by: extracting, from the filtered time-series vibration data, statistical features including one or more of mean, median, variance, standard deviation, skewness, or kurtosis, and non-linear entropy features including one or more of Shannon entropy, singular entropy, Kolmogorov entropy, approximate entropy, permutation entropy, or spectral entropy; applying a feature selection technique comprising one or more of low variance filtering, high correlation filtering, random forest-based selection, or forward feature selection on the extracted features; and determining, using the one or more machine learning models, the at least one physiological parameter based on the selected features.
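A minimal sketch of the feature pipeline in claim 18, assuming placeholder data: statistical features and a Shannon entropy estimate are computed per segment, low variance filtering prunes the feature matrix, and a random forest ranks what remains. The histogram bin count, variance threshold, and regression target are assumptions; high correlation filtering and forward selection would slot in similarly.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestRegressor

def statistical_features(segment, bins=32):
    """Statistical features named in claim 18 plus Shannon entropy of a
    normalized amplitude histogram."""
    hist, _ = np.histogram(segment, bins=bins)
    p = hist / hist.sum()
    shannon = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([segment.mean(), np.median(segment), segment.var(),
                     segment.std(), skew(segment), kurtosis(segment), shannon])

rng = np.random.default_rng(1)
X = np.stack([statistical_features(rng.standard_normal(500)) for _ in range(100)])
y = rng.uniform(60, 100, size=100)        # placeholder parameter values

# Low variance filtering, then random-forest-based importance ranking.
X_kept = VarianceThreshold(threshold=1e-4).fit_transform(X)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_kept, y)
print(forest.feature_importances_)
```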
19. A computer program product comprising a non-transitory computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a computerized device comprising processing hardware to execute a method of non-invasively determining physiological parameters from biophotonic signals of a subject using machine learning models, wherein the method comprises: emitting, using an optical device, coherent laser light on one or more body regions of a subject; capturing, using an image-capturing device, reflected light from the one or more body regions of the subject as an image or a video, wherein the reflected light is indicative of vibrations generated by physiological events of the subject, and wherein the one or more body regions comprise the head, neck, chest, back, stomach, hand, leg area, or combinations thereof; obtaining, from the image-capturing device, data characterizing the reflected light from the one or more body regions of the subject, wherein the data characterizing the reflected light comprises one or more reflected light images, one or more reflected light videos, or combinations thereof; quantifying, using a motion description model, motion in consecutive frames of the reflected light images or videos; converting data characterizing the quantified motion into time-series vibration data; and applying one or more bandpass filters to the time-series vibration data to isolate frequency components corresponding to one or more physiological parameters, wherein the bandpass filters selectively pass frequency components pertinent to the one or more physiological parameters, resulting in selective physiological velocity data such as heart sounds, respiratory sounds, blood flow sounds, respiratory airflow sounds, thoracic or diaphragmatic sounds, gastrointestinal motility related sounds (derived from bowel sounds), peripheral artery bruits, carotid artery bruits, or combinations thereof.
PCT/EP2025/060685, filed 2025-04-17 (priority date 2024-04-17): System and method for determining physiological parameters of subject from biophotonic signals using machine learning. Status: Pending. Publication: WO2025219540A1 (en).

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US202463635049P | 2024-04-17 | 2024-04-17 |
US63/635,049 | 2024-04-17 | |

Publications (1)

Publication Number | Publication Date
WO2025219540A1 (en) | 2025-10-23

Family

ID=95480675

Family Applications (2)

Application Number | Status | Priority Date | Filing Date | Title
PCT/EP2025/060582 | Pending WO2025219489A1 (en) | 2024-04-17 | 2025-04-16 | System and method for determining calibrated blood pressure from biophotonic signals using machine learning
PCT/EP2025/060685 | Pending WO2025219540A1 (en) | 2024-04-17 | 2025-04-17 | System and method for determining physiological parameters of subject from biophotonic signals using machine learning

Family Applications Before (1)

Application Number | Status | Priority Date | Filing Date | Title
PCT/EP2025/060582 | Pending WO2025219489A1 (en) | 2024-04-17 | 2025-04-16 | System and method for determining calibrated blood pressure from biophotonic signals using machine learning

Country Status (1)

Country | Link
WO (2) | WO2025219489A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20150157224A1 (en) * | 2013-12-05 | 2015-06-11 | Samsung Electronics Co., Ltd. | System and Method for Remotely Identifying and Characterizing Life Physiological Signs
US20170209047A1 (en) * | 2012-08-01 | 2017-07-27 | Bar Ilan University | Method and system for non-invasively monitoring biological or biochemical parameters of individual
US20200099839A1 (en) * | 2018-09-20 | 2020-03-26 | ContinUse Biometrics Ltd. | Sample inspection utilizing time modulated illumination
US20220067410A1 (en) * | 2018-12-28 | 2022-03-03 | Guardian Optical Technologies Ltd | System, device, and method for vehicle post-crash support
US20230086376A1 (en) * | 2021-02-16 | 2023-03-23 | Health Sensing Co., Ltd. | Signal processing apparatus, signal processing system, and signal processing program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10206576B2 (en) * | 2014-09-10 | 2019-02-19 | Samsung Electronics Co., Ltd. | Laser speckle interferometric system and method for mobile devices
RU2640777C2 (en) * | 2016-04-28 | 2018-01-11 | Samsung Electronics Co., Ltd. | Autonomous wearable optical device and method for continuous noninvasive measurement of physiological parameters
RU2648029C2 (en) * | 2016-08-10 | 2018-03-21 | Samsung Electronics Co., Ltd. | Device and method of blood pressure measurement
IL258461A (en) * | 2017-04-06 | 2018-06-28 | Continuse Biometrics Ltd | System and method for blood pressure measurement
US20250366723A1 (en) * | 2022-06-15 | 2025-12-04 | The General Hospital Corporation | System for and method of measuring blood pressure non-invasively with light

Also Published As

Publication number | Publication date
WO2025219489A1 (en) | 2025-10-23

Legal Events

Code | Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 25720864

Country of ref document: EP

Kind code of ref document: A1