
WO2017051415A1 - A system and method for remotely obtaining physiological parameter of a subject - Google Patents


Info

Publication number
WO2017051415A1
WO2017051415A1 PCT/IL2016/051047 IL2016051047W WO2017051415A1 WO 2017051415 A1 WO2017051415 A1 WO 2017051415A1 IL 2016051047 W IL2016051047 W IL 2016051047W WO 2017051415 A1 WO2017051415 A1 WO 2017051415A1
Authority
WO
WIPO (PCT)
Prior art keywords
individual
signal
camera
ppg
forehead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2016/051047
Other languages
French (fr)
Inventor
Dmitry GOLDENBERG
Valeriy BLYUS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sami Shamoon College Of Engineering (rA)
Sensority Ltd
Original Assignee
Sami Shamoon College Of Engineering (rA)
Sensority Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sami Shamoon College Of Engineering (rA), Sensority Ltd filed Critical Sami Shamoon College Of Engineering (rA)
Publication of WO2017051415A1 publication Critical patent/WO2017051415A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/024 Measuring pulse rate or heart rate
    • A61B5/02416 Measuring pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B5/0037 Performing a preliminary scan, e.g. a prescan for identifying a region of interest
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7257 Details of waveform analysis characterised by using transforms using Fourier transforms

Definitions

  • Fig. 2 schematically illustrates an exemplary recording queue process that may save in a memory all frames that are captured by the camera.
  • the recording queue process may involve the following steps: checking whether the queue is full (step 21); if it is not full, obtaining a frame from the camera (step 22) and storing the obtained frame in the queue (step 23). As shown in the figure, this process may repeat itself during the entire operating session of the system.
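  • the recording queue loop of steps 21-23 can be sketched in Python; this is a minimal illustration, in which the `FakeCamera` stand-in, the `read()` method and the queue size are assumptions rather than the patent's actual implementation:

```python
import queue

class FakeCamera:
    """Stand-in for a real camera stream; returns numbered dummy frames."""
    def __init__(self, n_frames):
        self.frames = list(range(n_frames))
    def read(self):
        return self.frames.pop(0) if self.frames else None

def record(camera, frame_queue, max_frames):
    """Steps 21-23: while the queue is not full, grab a frame and store it."""
    grabbed = 0
    while grabbed < max_frames:
        if frame_queue.full():      # step 21: queue is full, stop recording
            break
        frame = camera.read()       # step 22: obtain a frame from the camera
        if frame is None:
            break
        frame_queue.put(frame)      # step 23: store the obtained frame
        grabbed += 1
    return grabbed

cam = FakeCamera(n_frames=5)
q = queue.Queue(maxsize=10)
n = record(cam, q, max_frames=5)
```

In a real system this loop would run in its own thread for the entire session, while the monitoring module drains the queue in parallel.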
  • the system may record all possible frames given by the camera while, in parallel, a data monitoring module takes the frames one by one from the recording queue and transforms them into signal samples; the conversion process is described herein with respect to Fig. 3.
  • the data monitoring module builds and processes PPG signal.
  • the system adds these samples to a signal buffer, which contains the other samples of the signal acquired during the last few seconds (e.g., the last 10 seconds). Since the extracted signal contains a lot of noise, defects and unnecessary frequencies, the signal must be processed and/or filtered.
  • the conversion process as performed by the data monitoring module may involve the following steps:
  • a single frame is indicated by numeral 40 in Figs. 4-7, while a series of frames is indicated by F1 to Fn in Fig. 7;
  • Acquiring a sample of the forehead of a person (or at least from a partial head view of that person) (step 32).
  • an array of pixels that represents a sample of the forehead of a person that appears in frame 40 is indicated by numeral 42 in Figs. 4-7;
  • a sample of the background is indicated by numeral 43 in Fig. 7;
  • Subtracting the background sample from the forehead sample as to receive a subtracted value (Sn) (step 34).
  • a mean value of the forehead sample is indicated by numeral 72 in Fig. 7 and a corresponding value of the background sample is indicated by numeral 73 in Fig. 7, while the subtracted value of them is represented by numeral 74 in Fig. 7.
  • the raw signal buffer is indicated by numeral 71 in Fig. 7.
  • the signal buffer 71 is populated with subtracted values (S1 to Sn) over time (as indicated by T1 to Tn);
  • Checking whether the signal buffer is ready (i.e., whether the buffer size is above a predetermined value). If the buffer is not ready, repeating steps 31 to 35 until the buffer size is above or at least equal to the predetermined value (step 36);
  • Processing the signal buffer by applying one or more signal processing algorithms to it (step 37), such as a band-pass filter, a median filter and a cubic interpolation filter, as will be described in further detail hereinafter; and applying a Fast Fourier Transform (FFT) algorithm to the processed signal buffer so as to convert it to a representation in the frequency domain (step 38); and
  • the system applies a band-pass filter to extract the frequency bands of interest, for example, frequencies within the expected heart rate band.
  • the human heart rate is within 24 - 240 beats per minute, corresponding to 0.4 - 4.0 Hz frequency band (see the term Heart Rate Variability (HRV) at https://en.wikipedia.org/wiki/Heart_rate_variability).
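  • as an illustration, a band-pass restricted to the 0.4-4.0 Hz heart rate band can be applied in the frequency domain with NumPy; this FFT-mask sketch is a simple stand-in for the patent's (unspecified) filter design, and the 30 FPS frame rate is an assumed example value:

```python
import numpy as np

def bandpass(signal, fs, low=0.4, high=4.0):
    """Zero out FFT components outside the [low, high] Hz band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 30.0                                   # assumed camera frame rate (FPS)
t = np.arange(0, 10, 1.0 / fs)
# simulated 1.2 Hz "pulse" (72 BPM) plus slow 0.05 Hz drift and 10 Hz noise
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 2.0 * np.sin(2 * np.pi * 0.05 * t)
       + 0.3 * np.sin(2 * np.pi * 10.0 * t))
filtered = bandpass(raw, fs)                # drift and noise are suppressed
```

After filtering, the dominant component of `filtered` is the 1.2 Hz pulse, while the drift (below the low cut) and the 10 Hz noise (above the high cut) are removed.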
  • the system performs noise reduction on the previously band-passed signal, for example by applying a nonlinear median filter. Due to the limited camera frame rate, the signal has low resolution, so a cubic spline interpolation can be applied to increase the signal resolution by interpolating missing samples.
  • the system performs a Fast Fourier Transform (FFT) to transform the signal into the frequency domain for extracting the heart rate in Beats per Minute (BPM) and other important physiological features.
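  • the heart rate extraction can be sketched as picking the dominant FFT peak inside the 0.4-4.0 Hz band; this NumPy sketch uses an assumed 30 FPS frame rate and a simulated signal, not the patent's actual data:

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Return the dominant frequency of the PPG signal, in beats per minute."""
    spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    # search only the physiologically plausible 0.4-4.0 Hz (24-240 BPM) band
    band = (freqs >= 0.4) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

fs = 30.0                                  # assumed camera frame rate
t = np.arange(0, 20, 1.0 / fs)             # 20 s window -> 0.05 Hz resolution
ppg = np.sin(2 * np.pi * 1.25 * t)         # simulated 75 BPM pulse
bpm = heart_rate_bpm(ppg, fs)              # -> 75.0
```

A longer analysis window gives finer frequency resolution and therefore a more precise BPM estimate.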
  • an exemplary pseudo code for a single cycle may involve the following tasks:
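  • the pseudo code itself is not reproduced here; a hypothetical Python sketch of one such cycle, with stand-in helpers for frame grabbing and ROI extraction (all names, ROI coordinates and the random test frames are illustrative assumptions), might look like this:

```python
import numpy as np

def grab_frame(rng):
    """Stand-in for reading one RGB frame (H x W x 3) from the recording queue."""
    return rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)

def single_cycle(frame, buffer):
    """One monitoring cycle: sample the ROIs and append to the raw signal buffer."""
    forehead = frame[20:60, 100:220, 1]    # steps 31-32: forehead ROI, green channel
    background = frame[0:20, 0:40, 1]      # step 33: background ROI, green channel
    sample = forehead.mean() - background.mean()   # step 34: subtract background
    buffer.append(sample)                  # step 35: store the sample
    return buffer

rng = np.random.default_rng(0)
buffer = []
for _ in range(100):
    buffer = single_cycle(grab_frame(rng), buffer)
```

Repeating this cycle until the buffer reaches its predetermined size (step 36) yields the raw PPG time series that the later filtering and FFT stages consume.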
  • the camera can be any optical instrument capable of recording images, which may be stored locally, transmitted to another location, or both.
  • the camera may include a charge-coupled device (CCD) or other type of sensor to capture images.
  • the resolution of the captured images affects the performance of the algorithm because the image is a key resource: the larger the image (i.e., the higher the resolution), the more information the algorithm must process during execution. So the higher the resolution and size of the image, the fewer Frames per Second (FPS) are captured.
  • a relatively low image capture size can be an image of 320 x 240 pixels.
  • the system may find and process the face and forehead coordinates of at least one person at a time. Before starting the processing of the captured data, the system may continuously monitor and record the coordinates of the face and forehead of the at least one person. When recording starts, signal information may be obtained from the last recorded coordinates of the face and forehead. Therefore, it is preferred that the subject person does not move during the entire process of recording a signal.
  • the Open Source Computer Vision (OpenCV) library provides a fast method of human face detection: face detection using Haar cascades.
  • Object detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones (herein the Viola-Jones method) in "Rapid Object Detection using a Boosted Cascade of Simple Features", Computer Vision and Pattern Recognition (CVPR), Proceedings of the 2001 IEEE Computer Society Conference, Volume 1, 2001.
  • OpenCV, which is a library of programming functions mainly aimed at real-time computer vision, already contains pre-trained classifiers for face detection, so first the required XML classifiers must be loaded and some initialization performed. Then the input image is loaded in grayscale (e.g., see http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html).
  • Fig. 4 shows a frame 40 in which the detected face of a person is indicated by an oval line 41 and the forehead region of this person is indicated by a rectangular line 42. If a face is found, the detector returns the positions of detected faces as Rect(x, y, w, h). From this rectangle we can create the ROI on input images for further signal sampling.
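  • for instance, once the Viola-Jones detector returns a face as Rect(x, y, w, h), a forehead ROI can be derived from it by simple geometry; the proportions below (upper quarter of the face, middle three-fifths of its width) are illustrative assumptions, not the patent's values:

```python
def forehead_roi(x, y, w, h):
    """Derive a forehead rectangle from a detected face Rect(x, y, w, h)."""
    fx = x + w // 5          # trim a fifth of the face width on each side
    fw = w - 2 * (w // 5)
    fy = y + h // 10         # skip the hairline
    fh = h // 4              # keep roughly the upper quarter of the face
    return fx, fy, fw, fh

# example: face detected at (100, 50) with size 200 x 200
roi = forehead_roi(100, 50, 200, 200)    # -> (140, 70, 120, 50)
```

The returned rectangle can then be used to slice the forehead pixel array out of each frame for sampling.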
  • Fig. 5 shows a detailed view of an array of pixels that represent the forehead region as indicated by rectangle 42 in Fig. 4.
  • Fig. 6 schematically illustrates an example of an image sampling pipeline for extracting a mean value (as indicated by numeral 44) from the array of pixels (e.g., in the green channel of the RGB color model) that represent the forehead region within the boundaries of rectangle 42.
  • the mean value of the forehead sample is 156.34123.
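  • the sampling in Fig. 6 amounts to averaging the green channel of the forehead pixel array; a NumPy sketch (the toy 2 x 2 patch and its values are illustrative, and RGB channel order is assumed):

```python
import numpy as np

def forehead_sample(pixels):
    """Mean of the green channel of an H x W x 3 RGB pixel array."""
    return pixels[:, :, 1].mean()          # channel index 1 = green in RGB order

# toy 2 x 2 forehead patch; the green values are 150, 160, 155, 160
patch = np.array([[[10, 150, 30], [12, 160, 31]],
                  [[11, 155, 29], [13, 160, 33]]], dtype=np.uint8)
mean_g = forehead_sample(patch)            # -> 156.25
```

One such mean value per frame becomes one sample of the raw PPG signal.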
  • the recorder queue captures all frames issued by the camera, and if the sampling frequency is less than the camera FPS, no more than one frame is lost, because after finishing the sampling of a frame the monitoring unit refers back to the recorder queue for the next frame in the queue.
  • Tr is the recording time;
  • Td is the time required to sample all the remaining frames in the queue after the signal recording has finished. So the total time Tt required by the system to produce results is: Tt = Tr + Td.
  • when a new image is obtained from the camera, the system extracts the region of the forehead, converts it into one single sample and adds the sample to the signal. Due to the fact that around the subject's forehead there are other items pulsing at different frequencies (e.g., a lamp or a wall) which can fall within the frequency bands of interest, the system may apply some preprocessing to compensate for the presence of unwanted frequencies. So, in terms of the time series of samples of the signal, the system may need to subtract the background signal from the forehead signal (as described herein with respect to Figs. 3 and 7).
  • the system extracts the green channel as there is the least noise, and calculates the mean value of all pixels in the green channel.
  • the result is a single value that represents the forehead sample (as shown in Fig. 6, the forehead region is indicated by numeral 42 and the mean value is indicated by numeral 44).
  • given an RGB matrix M of size w x h whose elements P[r, g, b](i, j) are RGB pixels, the mean of the green channel is: mean = (1 / (w * h)) * sum over (i, j) of P(i, j).g
  • the resulting mean value describes the overall brightness of the entire region around the face at a given time.
  • background_pixels = numpy.asarray(background_pixels, dtype=numpy.uint8)
    return background_pixels.mean()
  • the buffer represents a window of T seconds and contains all the signal samples from the last T (e.g., 15) seconds of recording.
  • the static size of the buffer is the maximum number of frames that the camera can produce in T seconds: Nbuffer = FPS * T.
  • the buffer is filled with samples Samplei of the signal.
  • this buffer represents a raw PPG discrete time signal X[Nbuffer] (herein the PPG signal).
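  • the buffer sizing and the sliding-window behavior can be sketched as follows; the 30 FPS frame rate and 15 s window are example values, and the deque is only one convenient way to realize such a window:

```python
from collections import deque

FPS = 30            # assumed maximum camera frame rate
T = 15              # buffer window length in seconds
N_BUFFER = FPS * T  # maximum number of frames the camera can produce in T seconds

# a deque with maxlen behaves as the sliding raw-signal window X[N_buffer]:
signal = deque(maxlen=N_BUFFER)
for i in range(2 * N_BUFFER):      # feed twice the window length of samples
    signal.append(float(i))
# only the most recent N_BUFFER samples are retained
```

Older samples fall out of the window automatically as new frames arrive, so the buffer always holds the last T seconds of the signal.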
  • Fig. 7 schematically demonstrates the filling of the raw signal buffer 71 with the subtracted values (S1 to Sn) according to the process described hereinabove with respect to Fig. 3.
  • Fig. 8 shows a graphic representation of a raw PPG discrete time signal.
  • the acquired PPG signal needs some processing before it will be possible to see and retrieve useful information.
  • the system applies a band-pass filter with a low cut-off of 0.7 Hz and a high cut-off of 4.0 Hz to keep only the frequencies in the bands of interest.
  • the band pass filter helps in both removing the DC component, due to face movement or changes in venous pressure, and also high frequency noise.
  • the result of the band-pass filtering (on 30 seconds of the PPG signal) is shown in Fig. 9. Median filtering - noise reduction:
  • a median filter or any other nonlinear digital filtering technique to remove noise can be applied to perform some kind of noise reduction from the PPG signal.
  • Fig. 10 shows the resolution issue and the improved signal after cubic spline interpolation. This figure shows the comparison between the source PPG signal and the cubic interpolated signal. Looking at a separate portion of the PPG signal (Fig. 11), it can be seen how cubic interpolation compensates for the low resolution of the signal.
  • the PPG signal gives a visual indication of the heart rate variability, but deeper and more extensive study of the obtained PPG signal requires switching from the time domain to the frequency domain.
  • Fig. 12 shows the raw PPG signal (top graph); the middle graph represents all frequencies of the signal shown in the top graph, and in the bottom graph, after some zooming, the frequency components of the raw PPG signal can be seen. Also, outside the frequency bands of interest, Mayer waves (cyclic changes in arterial blood pressure) can be seen at a frequency of 0.1 Hz.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Cardiology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to a system for detection of a physiological parameter of an individual, comprising at least one camera for capturing images of said individual from a remote location, and a processing unit for analyzing said captured images in order to extract data that represent physiological parameters relative to said individual, wherein said processing unit is adapted for analyzing photoplethysmogram (PPG) signals that reflect changes in volume within an organ or whole body of said individual that result from fluctuations in the amount of blood or air it contains.

Description

A SYSTEM AND METHOD FOR REMOTELY OBTAINING PHYSIOLOGICAL PARAMETER OF A SUBJECT
Field of the Invention
The present invention relates to the field of imaging systems. More particularly, the invention relates to a system and method for obtaining physiological parameter of a subject by optical means such as a camera that is located remotely from the subject.
Background of the invention
A photoplethysmogram (PPG) is an optically obtained plethysmogram, a volumetric measurement of an organ. In the prior art, a PPG signal is often obtained by using a pulse oximeter, which illuminates the skin and measures changes in light absorption. A conventional pulse oximeter monitors the perfusion of blood to the dermis and subcutaneous tissue of the skin. With each cardiac cycle the heart pumps blood to the periphery. Even though this pressure pulse is somewhat damped by the time it reaches the skin, it is enough to distend the arteries and arterioles in the subcutaneous tissue. If the pulse oximeter is attached without compressing the skin, a pressure pulse can also be seen from the venous plexus, as a small secondary peak. However, in order to use the oximeter, its probe must be physically applied to a person's body, usually to the finger.
It is an object of the present invention to provide a system which is capable of detecting physiological parameters of a subject from a remote location, thus eliminating the need to be physically attached to a person's body.
Other objects and advantages of the invention will become apparent as the description proceeds.
Summary of the Invention
The present invention relates to a system for detection of a physiological parameter of an individual, comprising at least one camera (e.g., a color digital camera) for capturing images of said individual from a remote location, and a processing unit for analyzing said captured images in order to extract data that represent physiological parameters relative to said individual, wherein said processing unit is adapted for analyzing photoplethysmogram (PPG) signals that reflect changes in volume within an organ or whole body of said individual that result from fluctuations in the amount of blood or air it contains.
According to an embodiment of the invention, the system further comprises a recording queue module for saving a plurality of frames and a monitoring module adapted for grabbing frames one by one from said recording queue module.
In another aspect, the present invention relates to a method for remotely obtaining a physiological parameter of an individual person, comprising: a) capturing images of said individual from a remote location by using at least one camera; b) processing said captured images in order to extract data that represent physiological parameters relative to said individual, wherein the processing involves analyzing photoplethysmogram (PPG) signals that reflect changes in volume within an organ or whole body of said individual that result from fluctuations in the amount of blood or air it contains.
According to an embodiment of the invention, the method further comprises: a) Initializing and opening camera stream; b) Initializing a raw signal buffer that is adapted to be populated with the processed data; c) Building queue for recording frames taken from the camera stream; d) Detecting region-of-interest (ROI) in the frames; and e) Capturing data from said detected ROI and starting processing the captured data for the extraction of samples for the creation of a PPG signal.
According to an embodiment of the invention, the detected ROI is the face and the forehead region of the person.
According to an embodiment of the invention, the signal buffer is populated with mean values that represent the forehead region of the person.
According to an embodiment of the invention, the processing of the captured image is done in the Green channel of a RGB color model.
Brief Description of the Drawings
In the drawings:
Fig. 1 schematically illustrates a process of obtaining physiological parameter of a subject in accordance with an embodiment of the present invention;
Fig. 2 schematically illustrates a recording queue process, according to an embodiment of the invention;
Fig. 3 schematically illustrates the operating process of a physiological data monitoring module, according to an embodiment of the invention;
Fig.4 shows an exemplary visual layout of a face and forehead detection;
Fig. 5 shows a detailed view of an array of pixels that represent the forehead region as detected in Fig. 4;
Fig. 6 schematically illustrates an image sampling pipeline for extracting mean value from an array of pixels (in a green channel of a RGB color model based camera) that represents the forehead region of a person;
Fig. 7 schematically illustrates a signal sampling diagram, according to an embodiment of the present invention;
Fig. 8 is a graph that shows a raw PPG discrete time signal;
Fig. 9 shows band-pass filtering on 30 seconds of the raw PPG signal;
Fig. 10 shows a comparison between the source PPG signal and the cubic interpolated signal;
Fig. 11 shows, on a separate portion of the PPG signal, the effect of applying cubic spline interpolation;
Fig. 12 shows the raw PPG signal (top graph), the middle graph represents all frequencies of signal shown at the top graph, and at the bottom graph, after some zooming, the frequency components of the raw PPG signal is shown; and
Fig. 13 shows frequency components of PPG signal.
Detailed Description of the Invention
The following discussion is intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. While the invention will be described in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a personal computer, those skilled in the art will recognize that the invention may also be implemented in combination with other computer systems, such as existing surveillance systems, in particular airport security systems or other regions that require large area surveillance to detect, recognize and track suspicious persons or unauthorized intruders.
The program modules described herein include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular data types adapted for analyzing photoplethysmogram (PPG) signals (e.g., as obtained by one or more cameras), which reflect changes in volume within an organ or whole body of a person that result from fluctuations in the amount of blood or air it contains. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations and with a variety of camera types, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Embodiments of the invention may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process.
Unless otherwise indicated, the functions described herein may be performed by executable code and instructions stored in a computer readable medium and running on one or more processor-based systems. However, state machines and/or hardwired electronic circuits can also be utilized. Further, with respect to the example processes described herein, not all the process states need to be reached, nor do the states have to be performed in the illustrated order. Further, certain process states that are illustrated as being serially performed can be performed in parallel. According to an embodiment of the present invention, the process of analyzing PPG signals may consist of individual modules, each of which affects the performance of the process and the execution of the method as a whole. Fig. 1 schematically illustrates a process of obtaining a physiological parameter of a subject in accordance with an embodiment of the present invention.
The process may involve the following steps:
- Initializing and opening the camera stream (step 11);
- Initializing a raw signal buffer (step 12);
- Building a queue for recording frames (step 13);
- Detecting a region-of-interest (ROI) for capturing data (step 14). For example, the ROI can be the face and forehead of a person;
- Upon detection of the ROI, capturing data from the ROI and starting to process the captured data for the extraction of samples of the PPG signal (step 15);
- Monitoring physiological data from the processed data (step 17); and
- Recording frames from the camera stream (step 16). For example, a monitoring module may grab frame after frame from the recorded frames.
One of the most important points of the method is initializing the camera settings and providing the issued images as input to the process that subsequently builds the PPG signal. As described hereinabove, the process may involve the recognition of the face and forehead of a person, which may serve as an area (i.e., the ROI) for the extraction of samples of the PPG signal.
It is assumed that the brightness of optical signal pulses is related to the amount of oxygen in the blood: oxygenated blood is bright red and deoxygenated blood is dark red. Previous studies revealed that part of the forehead of a person exhibits more intense flushing. Accordingly, analyzing PPG signals obtained by monitoring the forehead region of a person yielded the best results, and therefore the method of monitoring physiological data is more effective when working with the forehead region, although other regions of the human body can also be monitored (e.g., by capturing images of other ROIs of the human body with one or more remote cameras).
The implementation of the monitoring modules consumes considerable processor time, which affects the number of frames processed per second. Therefore, the system may not always have time to process all the frames that the camera is able to produce in one second. To avoid potential loss of signal samples, there is a need to store the frames (preferably to save all the frames that are captured by the camera) in a queue from which they will be processed next. Fig. 2 schematically illustrates an exemplary recording queue process that may save in a memory all frames that are captured by the camera. The recording queue process may involve the following steps: checking whether the queue is full (step 21); if it is not full, obtaining a frame from the camera (step 22) and storing the obtained frame in the queue (step 23). As shown in the figure, this process may repeat itself during the entire operating session of the system.
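The recording-queue loop of Fig. 2 can be sketched in Python roughly as follows (a minimal, hypothetical illustration: the function and parameter names are assumptions, not the patent's actual implementation):

```python
import queue

def record_frames(camera_read, frame_queue, num_frames):
    """Follow the recording-queue steps of Fig. 2 for num_frames iterations:
    check whether the queue is full (step 21), obtain a frame from the
    camera (step 22) and store the obtained frame in the queue (step 23)."""
    for _ in range(num_frames):
        if frame_queue.full():       # step 21: queue full, nothing to store
            continue
        frame = camera_read()        # step 22: obtain a frame from the camera
        frame_queue.put(frame)       # step 23: store the frame in the queue
```

A bounded `queue.Queue(maxsize=...)` gives the full/not-full check for free; in the patent's design this loop would run for the entire operating session rather than a fixed number of iterations.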
The system may record all possible frames given by the camera while, at the same time (in parallel), a data monitoring module takes frame after frame from the recording queue and transforms these frames into signal samples; the conversion process will be described herein with respect to Fig. 3. The data monitoring module builds and processes the PPG signal. The system adds these samples to a signal buffer which contains the other samples of the signal from the last few seconds (e.g., the last 10 seconds). Since the extracted signal has a lot of noise, defects and unnecessary frequencies, the signal must be processed and/or filtered. According to an embodiment of the invention, the conversion process as performed by the data monitoring module may involve the following steps:
- Taking a frame from the queue (step 31). A single frame is indicated by numeral 40 in Figs. 4-7, while a series of frames is indicated by F1 to Fn in Fig. 7;
- Acquiring a sample of the forehead of a person (or at least from a partial head view of that person) (step 32). For example, an array of pixels that represents a sample of the forehead of a person appearing in frame 40 is indicated by numeral 42 in Figs. 4-7;
- Acquiring a sample of the background (step 33). For example, a sample of the background is indicated by numeral 43 in Fig. 7;
- Subtracting the background sample from the forehead sample to receive a subtracted value (Sn) (step 34). For example, a mean value of the forehead sample is indicated by numeral 72 in Fig. 7 and a corresponding value of the background sample is indicated by numeral 73 in Fig. 7, while their subtracted value is represented by numeral 74 in Fig. 7. In this example, the subtracted value is Sn = 47.93342;
- Adding the subtracted sample to a raw signal buffer (step 35). For example, the raw signal buffer is indicated by numeral 71 in Fig. 7. As indicated in Fig. 7, the signal buffer 71 is populated with subtracted values (S1 to Sn) over time (as indicated by T1 to Tn);
- Checking whether the signal buffer is ready (i.e., whether the buffer size is above a predetermined value). If the buffer is not ready, repeating steps 31 to 35 until the buffer size is above or at least equal to the predetermined value (step 36);
- If the signal buffer is ready, processing the signal buffer by applying one or more signal processing algorithms to it (step 37), such as a band-pass filter, a median filter and a cubic interpolation filter, as will be described in further detail hereinafter;
- Applying a Fast Fourier Transform (FFT) algorithm to the processed signal buffer to convert it to a representation in the frequency domain (step 38); and
- Applying feature extraction to the FFT representation of the signal buffer (step 39).
As aforementioned, the steps of the conversion process described hereinabove are also schematically illustrated with respect to the signal sampling diagram of Fig. 7.
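Steps 31 to 35 of the conversion process can be sketched as a single iteration of the data monitoring loop (a hypothetical sketch: `forehead_mean` and `background_mean` stand for the per-region mean extraction described later in this document, and all names are assumptions):

```python
import queue

def process_frame(frame_queue, signal_buffer, forehead_mean, background_mean):
    """One iteration of the data monitoring loop of Fig. 3 (steps 31-35)."""
    timestamp, frame = frame_queue.get()         # step 31: take a frame
    f_mean = forehead_mean(frame)                # step 32: forehead sample
    b_mean = background_mean(frame)              # step 33: background sample
    sample = abs(f_mean - b_mean)                # step 34: subtract samples
    signal_buffer.append((timestamp, sample))    # step 35: add to raw buffer
    return sample
```

Frames are assumed here to be queued as (timestamp, image) pairs; once the signal buffer reaches the predetermined size (step 36), filtering and the FFT (steps 37-39) would run on it.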
At first, the system applies a band-pass filter to extract the frequency bands of interest. For example, we might select frequencies within the range of the human heart rate, which is 24 - 240 beats per minute, corresponding to the 0.4 - 4.0 Hz frequency band (see the term Heart Rate Variability (HRV) at https://en.wikipedia.org/wiki/Heart_rate_variability). Next, the system performs some kind of noise reduction on the previously band-passed signal, for example by applying a nonlinear median filter. Due to the limited camera frame rate, the signal has low resolution, so a cubic spline interpolation can be applied to increase the signal resolution by interpolating missing samples.
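The band-pass and noise-reduction stages just described can be sketched with SciPy (a sketch under assumptions: the patent does not name a filter family, so a Butterworth band-pass is used here, followed by a median filter):

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def clean_ppg(signal, fs, low=0.4, high=4.0, kernel=5):
    """Band-pass the raw PPG to the HRV band, then median-filter the result."""
    nyq = fs / 2.0
    b, a = butter(3, [low / nyq, high / nyq], btype='band')
    bandpassed = filtfilt(b, a, signal)            # zero-phase band-pass
    return medfilt(bandpassed, kernel_size=kernel)  # nonlinear noise reduction
```

On a raw signal containing a DC offset and a heart-rate-band component, the DC offset is largely removed while the in-band oscillation survives.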
At the end, the system performs a Fast Fourier Transform (FFT) to transform the signal into the frequency domain for extracting the heart rate in Beats per Minute (BPM) and other important physiological features.
According to an embodiment of the invention, an exemplary pseudo code for a single cycle may involve the following tasks:
1. Initialize remote camera.
2. Detect the face of a person.
3. Detect the forehead.
4. Record frames (separate thread).
   4.1. Get camera frame.
   4.2. Store frame to queue.
5. Get frame from queue.
   5.1. Get signal sample.
      5.1.1. Get forehead region sample.
      5.1.2. Get background region sample.
      5.1.3. Subtract background sample from forehead sample.
   5.2. Add sample to signal buffer.
   5.3. Process signal.
      5.3.1. Band-pass filter.
      5.3.2. Median filter.
      5.3.3. Cubic interpolation.
   5.4. Perform FFT on signal.
      5.4.1. Extract features.
6. Analyze extracted features.
7. Display data.
Remote Camera Initialization
The camera can be any optical instrument capable of recording images, which may be stored locally, transmitted to another location, or both. For example, the camera may include a charge-coupled device (CCD) or other type of sensor to capture images. The resolution of the captured images affects the performance of the algorithm, since the image is a key resource: the larger the image (i.e., the higher the resolution), the more information the algorithm must process during execution. Thus, the higher the resolution and size of the image, the fewer Frames per Second (FPS) are captured. For example, a relatively low image capture size can be an image of 320 x 240 pixels.

Detecting the Region of Interest (ROI) for signal capture
According to an embodiment of the invention, the system may find and process the face and forehead coordinates of at least one person at one execution time. Before starting the processing of the captured data, the system may continuously monitor and record the coordinates of the face and forehead of the at least one person. When recording starts, signal information may be obtained from the last recorded coordinates of the face and forehead. Therefore, it is preferred that the subject does not move during the entire process of recording a signal.
Performing face detection during the session of signal recording and data processing can dramatically reduce the performance of the system. Due to this fact, the coordinates of the face-forehead region may be fixed for further work with this ROI.
Face detection
The open source computer vision (OpenCV) library provides a fast method of human face detection: face detection using Haar cascades. Object detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones (herein the Viola-Jones method) in "Rapid Object Detection using a Boosted Cascade of Simple Features", Computer Vision and Pattern Recognition, 2001 (CVPR 2001), Proceedings of the 2001 IEEE Computer Society Conference (Volume 1), 2001.
It is a machine learning based approach in which a cascade function is trained from many positive and negative images; it is then used to detect objects in other images.
OpenCV, a library of programming functions mainly aimed at real-time computer vision, already contains pre-trained classifiers for face detection, so first we need to load the required XML classifiers and do some initialization, and then load our input image in grayscale (e.g., see http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html).
Before applying the Viola-Jones method we have to convert the 16-bit / 32-bit RGB frame to an 8-bit grayscale frame. The conversion loses some information, but it saves execution time of the Viola-Jones method.
Suppose the frame is represented as an RGB color matrix M[i][j] with i columns and j rows, where each element M[i][j] is a triplet (R, G, B) and R, G, and B are integers between 0 and 255. The grayscale weighted average X for each M[i][j] is given by the formula:

RGB(M[i][j]) → Gray: X = 0.299R + 0.587G + 0.114B
As the last step before face detection, we apply histogram equalization to improve the contrast of the image. To accomplish the equalization effect, the remapping from one distribution (histogram) to another should be the cumulative distribution function. For the histogram H(i), its cumulative distribution H'(i) is:

H'(i) = Σ_{0 ≤ j < i} H(j)

Finally, we use a simple remapping procedure to obtain the intensity values of the equalized image:

equalized(i, j) = H'(M[i][j])
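The grayscale conversion and histogram equalization above can be sketched directly from the formulas (a NumPy illustration; in practice OpenCV's cv2.cvtColor and cv2.equalizeHist provide equivalent, optimized routines):

```python
import numpy as np

def to_grayscale(rgb):
    """Weighted-average grayscale: X = 0.299R + 0.587G + 0.114B."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb @ weights).astype(np.uint8)

def equalize_histogram(gray):
    """Remap intensities through the cumulative distribution H'."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    # stretch the CDF so the equalized image again spans 0..255
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[gray].astype(np.uint8)
```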
Now we can find the face in the image (see Fig. 4, which shows a frame 40 in which the detected face of a person is indicated by an oval line 41 and the forehead region of this person is indicated by a rectangular line 42). If a face is found, the method returns the positions of the detected faces as Rect(x, y, w, h). From this rectangle we can create an ROI on input images for further signal sampling. Fig. 5 shows a detailed view of an array of pixels that represents the forehead region as indicated by the rectangle 42 in Fig. 4. Fig. 6 schematically illustrates an example of an image sampling pipeline for extracting a mean value (as indicated by numeral 44) from the array of pixels (e.g., in the Green channel of the RGB color model) that represents the forehead region within the boundaries of rectangle 42. In this example, the mean value of the forehead sample is 156.34123.
Forehead detection
Based on the coordinates of the face, we can easily calculate the coordinates of the forehead without applying additional detection methods. Having received the forehead region, we focus on a small amount of data in which the pulsation of pixel brightness is clearly visible. For example, see "Remote plethysmographic imaging using ambient light", Wim Verkruysse et al., Opt Express. 2008 Dec 22; 16(26): 21434-21445.
Given the face coordinates represented as a rectangle:

rect_face = {X_face, Y_face, W_face, H_face}

we can easily calculate the approximated forehead coordinates as a rectangle:

W_forehead = W_face × 0.25
H_forehead = H_face × 0.15
X_forehead = (X_face + (W_face × 0.5)) − (W_forehead × 0.5)
Y_forehead = (Y_face + (H_face × 0.15)) − (H_forehead × 0.5)

So we get the final forehead coordinates as a rectangle:

rect_forehead = {X_forehead, Y_forehead, W_forehead, H_forehead}
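The forehead approximation can be expressed as a small helper (a sketch assuming the reconstructed relations above, including H_forehead = H_face × 0.15, which is inferred rather than stated explicitly; rectangles are (x, y, w, h) tuples):

```python
def forehead_from_face(face_rect):
    """Approximate the forehead rectangle from a detected face rectangle."""
    x_face, y_face, w_face, h_face = face_rect
    w_forehead = w_face * 0.25
    h_forehead = h_face * 0.15                        # assumed fraction
    x_forehead = (x_face + w_face * 0.5) - w_forehead * 0.5
    y_forehead = (y_face + h_face * 0.15) - h_forehead * 0.5
    return (x_forehead, y_forehead, w_forehead, h_forehead)
```

For a 100 x 100 face at the origin this yields a 25 x 15 forehead rectangle centered horizontally near the top of the face.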
Frames record queue.
As mentioned earlier, the recorder queue captures all frames issued by the camera, and if the sampling frequency is less than the camera FPS, we do not lose more than one frame, because the monitoring unit, after finishing the sampling of a frame, refers to the recorder queue to sample the next frame in the queue.
We determine the size QSize of the recording queue for maximum utilization of all possible sampling frames:

QSize = (Tr × FPS) − (Tr × Fs)

where Tr is the recording time, FPS is the number of frames per second issued by the camera and Fs is the average number of samples obtained in one second; thus Fs = 1/T, where T is a constant time interval between samples. Assuming that FPS > Fs, we can get the delay time of our sampling system:

Td = QSize / Fs

Td is the time required to sample all the remaining frames in the queue after the signal recording has finished. So the total time Tt required by the system to produce results is:

Tt = Tr + Td

Extracting and sampling signal data
When a new image is obtained from the camera, the system extracts the region of the forehead, converts it into a single sample, and adds the sample to the signal. Due to the fact that around the subject's forehead there are other items pulsing at different frequencies (e.g., a lamp or a wall) which can fall within the frequency bands of interest, the system may apply some preprocessing to compensate for the presence of unneeded frequencies. So, in the time series of signal samples, the system may need to subtract the background signal from the forehead signal (as described herein with respect to Figs. 3 and 7).
Sampling forehead region
From the region of the forehead (as indicated by the rectangle 42 in Fig. 4), the system extracts the green channel, as it has the least noise, and calculates the mean value of all pixels in the green channel. In the end we get a single value that represents the forehead sample (as shown in Fig. 6, where the forehead region is indicated by numeral 42 and the mean value is indicated by numeral 44). Given the forehead ROI F represented by an RGB matrix M of size i × j, whose elements P[r, g, b](x, k) are RGB pixels, we extract the green value g from each pixel P[r, g, b](x, k) to obtain a new matrix, from which we calculate the average of all green values:

Fmean = (1 / (i × j)) × Σ_{x=0}^{i} Σ_{k=0}^{j} g(x, k)
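Under these definitions, the forehead sampling reduces to a mean over the green channel of the ROI (a NumPy sketch with assumed names; frame is an RGB image array and roi an (x, y, w, h) rectangle):

```python
import numpy as np

def forehead_sample(frame, roi):
    """F_mean: mean of the green channel inside the forehead ROI."""
    x, y, w, h = roi
    green = frame[y:y + h, x:x + w, 1]   # channel index 1 is G in RGB order
    return float(green.mean())
```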
Sampling background region
From the original image we calculate the mean value of all pixels around the face region. The resulting mean value describes the overall brightness of the entire region around the face at a given time.
The following is exemplary Python code for extracting the background region:

def get_background_sample(self, frame):
    face_rect = self.face.get_face()
    # convert IplImage to numpy array
    frame = numpy.asarray(frame[:, :], dtype=numpy.uint8)
    cols = frame.shape[0]
    rows = frame.shape[1]
    # find pixels outside the face region
    background_pixels = []
    for i in range(cols):
        for j in range(rows):
            if self.isInRect(i, j, face_rect) == False:
                # take the green channel value (index 1) of the pixel
                background_pixels.append(frame[i, j][1])
    # get mean value (from the green channel)
    background_pixels = numpy.asarray(background_pixels, dtype=numpy.uint8)
    return background_pixels.mean()
From the exemplary code above it can be seen that we extract all background pixels into a one-dimensional array background_pixels of size N and calculate the background mean value Bmean as:

Bmean = (1 / N) × Σ_{x=0}^{N} background_pixels[x]
Subtracting in the time domain

After extracting the mean luminance values of the pixels in both regions, Fmean and Bmean (forehead and background), we subtract one from the other to get the final mean value Samplevalue of our sample:

Samplevalue = |Fmean − Bmean|

Finally, we build the signal sample as a pair of Samplevalue and Sampletime (the time at which the frame entered the queue):

Samplei = (Sampletime, Samplevalue)

Filling a RAW signal buffer
The buffer represents a window of T seconds and contains all the signal samples from the last T seconds (e.g., 15 seconds) of recording. The static size of the buffer is the maximum number of frames that the camera can produce in T seconds:

Nbuffer = T × FPS

The buffer is filled with samples Samplei of the signal. This buffer represents a raw PPG discrete time signal X[Nbuffer] (herein the PPG signal). Fig. 7 schematically demonstrates the filling of the RAW signal buffer 71 with the subtracted values (S1 to Sn) according to the process described hereinabove with respect to Fig. 3. Fig. 8 shows a graphic representation of a raw PPG discrete time signal.
Signal Processing
The acquired PPG signal needs some processing before useful information can be seen and retrieved. We apply a band-pass filter to extract the frequency bands of interest (the frequency band of HRV). Then we need to apply noise reduction and interpolate the PPG signal to increase its sampling rate.
Band-Pass filtering
According to an embodiment of the invention, the system applies a band-pass filter with a low cut-off of 0.7 Hz and a high cut-off of 4.0 Hz to keep only the frequencies in the bands of interest. The band-pass filter helps in both removing the DC component, due to face movement or changes in venous pressure, and removing high-frequency noise. The result of the band-pass filtering (on a 30-second PPG signal) is shown in Fig. 9.

Median filtering - noise reduction
In case of a small SNR (signal-to-noise ratio) value, a median filter or any other nonlinear digital filtering technique can be applied to perform noise reduction on the PPG signal.
Interpolation
One of the biggest issues for HRV analysis is the low resolution of the signal, due to the limited frame rate we can obtain from a simple camera. Applying a cubic spline interpolation may increase the number of samples by up to 50 times (see http://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/interpolate.html#id5). This is fast enough to increase the resolution of the signal.
Having a discrete time signal X[n] with knots {(x_i, n_i) : i = 0, 1, ..., n}, we interpolate between each pair of knots (x_{i−1}, n_{i−1}) and (x_i, n_i) with polynomials:

n = q_i(x), i = 1, 2, ..., n

The curvature of a curve x = {x[n]} is given by its second derivative n''. As the spline takes a shape that minimizes the bending, both n' and n'' must be continuous everywhere, including at the knots. To achieve this, one must have, for 1 ≤ i ≤ n − 1:

q_i(x_i) = q_{i+1}(x_i)
q_i'(x_i) = q_{i+1}'(x_i)
q_i''(x_i) = q_{i+1}''(x_i)

Fig. 10 shows the resolution issue and the improved signal after cubic spline interpolation. This figure shows the comparison between the source PPG signal and the cubic interpolated signal. Looking at a separate portion of the PPG signal (Fig. 11), it can be seen how cubic interpolation compensates for the low resolution of the signal.
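The cubic-spline upsampling described in this section can be sketched with SciPy's interp1d, which the cited SciPy tutorial describes (the upsampling factor and the function names here are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import interp1d

def upsample_cubic(times, samples, factor=50):
    """Raise the effective sampling resolution by cubic-spline interpolation."""
    spline = interp1d(times, samples, kind='cubic')
    dense_times = np.linspace(times[0], times[-1], len(times) * factor)
    return dense_times, spline(dense_times)
```

Because a cubic spline reproduces cubic polynomials exactly, sampling a cubic curve and upsampling it recovers the curve itself, which makes the method easy to sanity-check.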
FFT performing and features extracting
The PPG signal gives a visual indication of the heart rate variability, but deeper and more extensive study of the obtained PPG signal requires switching from the time domain to the frequency domain.
A fast Fourier transform (FFT) algorithm allows us to compute the discrete Fourier transform (DFT). The FFT converts the signal to the frequency domain and allows seeing in detail all the frequency components of the signal.
Before applying the band-pass filter, many other frequencies of which the RAW signal is composed can be seen; many of them are the result of the radiation characteristics of foreign objects in the background of the observed subject. Fig. 12 shows the raw PPG signal (top graph); the middle graph represents all frequencies of the signal shown in the top graph, and in the bottom graph, after some zooming, the frequency components of the raw PPG signal can be seen. Also, outside the frequency bands of interest, Mayer waves (cyclic changes in arterial blood pressure) can be seen at a frequency of 0.1 Hz.
BPM extracting
In the frequency domain, HRV is described as the sum of elementary oscillatory components defined by their frequency and amplitude (power). Once the FFT is computed for the current sliding window contents, the magnitude bins (peaks) in the band of interest are spotted, and the index i corresponding to the maximum of the power spectrum is found. To find the index i we use a simple max function that detects the maximum bin. We then convert the index i into a frequency value F as follows:

F = (i × FPS) / N

where N is the size of the extracted signal. Finally, we convert F to beats per minute:

BPM = F × 60
In other words, we obtain the frequency in BPM that corresponds to the highest peak, as shown in Fig. 13. In the top graph we can see the band-passed PPG signal in the band of 0.7 - 4.0 Hz, and in the bottom graph the frequency components of this signal are shown, with the peak marked by a red circle.
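The peak-picking and the conversions F = (i × FPS) / N and BPM = F × 60 can be sketched as follows (a minimal NumPy illustration; variable names are assumptions):

```python
import numpy as np

def extract_bpm(signal, fps, low=0.7, high=4.0):
    """Return the dominant in-band frequency of a PPG signal as BPM."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal))        # magnitude bins
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)       # freqs[i] = i * fps / n
    in_band = (freqs >= low) & (freqs <= high)    # keep only the HRV band
    peak = np.argmax(np.where(in_band, spectrum, 0.0))
    return float(freqs[peak] * 60.0)              # F * 60 -> beats per minute
```

For example, a pure 1.5 Hz oscillation sampled at 30 FPS yields a peak at 1.5 Hz and therefore 90 BPM.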
The terms "for example", "e.g." and "optionally", as used hereinabove, are intended to introduce non-limiting examples.
All the above description and examples have been given for the purpose of illustration and are not intended to limit the invention in any way. Many different computer systems, methods of analysis, electronic and optical elements can be employed, all without exceeding the scope of the invention.

Claims

1. A system for detection of a physiological parameter of an individual, comprising at least one camera for capturing images of said individual from a remote location, and a processing unit for analyzing said captured images in order to extract data that represent physiological parameters relative to said individual, wherein said processing unit is adapted for analyzing photoplethysmogram (PPG) signals that reflects changes in volume within an organ or whole body of said individual that result from fluctuations in the amount of blood or air it contains.
2. A system according to claim 1, further comprising a recording queue module for saving a plurality of frames and a monitoring module adapted for grabbing frame per frame from said recording queue module.
3. A method for remotely obtaining a physiological parameter of an individual, comprising:
a) capturing images of said individual from a remote location by using at least one camera;
b) processing said captured images in order to extract data that represent physiological parameters relative to said individual, wherein the processing involves analyzing photoplethysmogram (PPG) signals that reflects changes in volume within an organ or whole body of said individual that result from fluctuations in the amount of blood or air it contains.
4. A method according to claim 3, further comprising:
a) Initializing and opening camera stream;
b) Initializing a raw signal buffer that is adapted to be populated with the processed data;
c) Building queue for recording frames taken from the camera stream;
d) Detecting region-of-interest (ROI) in the frames; and
e) Capturing data from said detected ROI and starting processing the captured data for the extraction of samples for the creation of a PPG signal.
5. A method according to claim 4, wherein the detected ROI is the face and the forehead region of the person.
6. A method according to claim 4, wherein the signal buffer is populated with mean values that represent the forehead region of the person.
7. A method according to claim 3, wherein the processing of the captured image is done in the Green channel of a RGB color model.
PCT/IL2016/051047 2015-09-21 2016-09-21 A system and method for remotely obtaining physiological parameter of a subject Ceased WO2017051415A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562221193P 2015-09-21 2015-09-21
US62/221,193 2015-09-21

Publications (1)

Publication Number Publication Date
WO2017051415A1 true WO2017051415A1 (en) 2017-03-30

Family

ID=58386405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/051047 Ceased WO2017051415A1 (en) 2015-09-21 2016-09-21 A system and method for remotely obtaining physiological parameter of a subject

Country Status (1)

Country Link
WO (1) WO2017051415A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150236740A1 (en) * 2012-11-02 2015-08-20 Koninklijke Philips N.V. Device and method for extracting physiological information

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706220A (en) * 2019-09-27 2020-01-17 贵州大学 Capsule endoscope image processing and analyzing method
CN110706220B (en) * 2019-09-27 2023-04-18 贵州大学 Capsule endoscope image processing and analyzing method
WO2024056087A1 (en) * 2022-09-16 2024-03-21 Jing Wei Chin Method and device for camera-based heart rate variability monitoring


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16848268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16848268

Country of ref document: EP

Kind code of ref document: A1