US12343177B2 - Video based detection of pulse waveform - Google Patents

Video based detection of pulse waveform

Info

Publication number
US12343177B2
Authority
US
United States
Prior art keywords
video stream
frames
pulse waveform
sequence
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires 2043-09-13
Application number
US17/591,929
Other versions
US20220240865A1 (en)
Inventor
Jeremy Speth
Patrick Flynn
Adam Czajka
Kevin Bowyer
Nathan Carpenter
Leandro Olie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Notre Dame
Securiport LLC
Original Assignee
University of Notre Dame
Securiport LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Notre Dame and Securiport LLC
Priority to PCT/IB2022/050960 (published as WO2022167979A1)
Priority to US17/591,929 (granted as US12343177B2)
Priority to TNP/2023/000194A (published as TN2023000194A1)
Publication of US20220240865A1
Assigned to ALTER DOMUS (US) LLC, AS COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SECURIPORT LIMITED LIABILITY COMPANY
Assigned to UNIVERSITY OF NOTRE DAME DU LAC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CZAJKA, ADAM; SPETH, JEREMY; BOWYER, KEVIN; FLYNN, PATRICK
Assigned to SECURIPORT LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CARPENTER, NATHAN; OLIE, LEANDRO
Application granted
Publication of US12343177B2
Legal status: Active
Adjusted expiration: 2043-09-13

Classifications

    • A61B 5/7278: Artificial waveform generation or derivation, e.g. synthesizing signals from measured signals
    • A61B 5/02416: Measuring pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/02405: Determining heart rate variability
    • A61B 5/1128: Measuring movement of the entire body or parts thereof using image analysis
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G06F 17/141: Discrete Fourier transforms
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; localisation; normalisation (of human faces)



Abstract

The video based detection of pulse waveform includes systems, devices, methods, and computer-readable instructions for capturing a video stream including a sequence of frames, processing each frame of the video stream to spatially locate a region of interest, cropping each frame of the video stream to encapsulate the region of interest, processing the sequence of frames, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames, and generating a time series of pulse waveform points to generate the pulse waveform of the subject for the sequence of frames.

Description

PRIORITY INFORMATION
This application claims the benefit of U.S. Provisional Patent Application No. 63/145,140, filed on Feb. 3, 2021, which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION Field of the Invention
The embodiments of the present invention generally relate to use of biometrics, and more particularly, to video based detection of pulse waveform and/or heart rate for a subject.
Discussion of the Related Art
In general, biometrics may be used to track vital signs that provide indicators of a subject's physical state, which may be used in a variety of ways. As an example, for border security or health monitoring, vital signs may be used to screen for health risks (e.g., temperature). While sensing temperature is a well-developed technology, collecting other useful and accurate vital signs such as pulse rate (i.e., heart rate or heart beats per minute) or pulse waveform has required physical devices to be attached to the subject. The desire to perform this measurement without physical contact has produced some video based techniques; however, these are generally limited in accuracy, require control of the subject's posture, and/or require close positioning of the camera.
Performing reliable pulse rate or pulse waveform estimation from a camera sensor is more difficult than contact plethysmography for several reasons. The change in reflected light from the skin's surface caused by the light absorption of blood is very small compared to changes caused by variations in illumination. Even in settings with ambient lighting, the subject's movements drastically change the reflected light and overpower the pulse signal.
Existing approaches to remote pulse estimation operate on the spatial and temporal dimensions separately. Typically, the spatial region of interest containing skin is converted to a single or few values for each frame independently, followed by processing over the temporal dimension to produce a pulse waveform. While this is effective for stationary subjects, it presents difficulties when the subject moves (e.g., talks). Examples of independent analysis of the spatial and temporal dimensions include independent component analysis (Poh 2010, Poh 2011), chrominance analysis (De Haan 2013), and plane orthogonal to skin (Wang 2017).
Accordingly, the inventors have developed systems, devices, methods, and computer-readable instructions that enable accurate capture of a pulse waveform without physical contact and with minimal constraints on the subject's movement and position.
SUMMARY OF THE INVENTION
Accordingly, the present invention is directed to a video based detection of pulse waveform that substantially obviates one or more problems due to limitations and disadvantages of the related art.
Objects of the present invention provide systems, devices, methods, and computer-readable instructions that enable accurate capture of a pulse waveform without physical contact and with minimal constraints on the subject's movement and position.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, the video based detection of pulse waveform includes systems, devices, methods, and computer-readable instructions for capturing a video stream including a sequence of frames, processing each frame of the video stream to spatially locate a region of interest, cropping each frame of the video stream to encapsulate the region of interest, processing the sequence of frames, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames, and generating a time series of pulse waveform points to generate the pulse waveform of the subject for the sequence of frames.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 illustrates a system for pulse waveform estimation according to an example embodiment of the present invention.
FIG. 2 illustrates a computer-implemented method for generating a pulse waveform according to an example embodiment of the present invention.
FIG. 3 illustrates a video based application for generating a pulse waveform according to an example embodiment of the present invention.
FIG. 4 illustrates an exponentially increasing dilation rate as a function of network depth.
DETAILED DESCRIPTION OF THE INVENTION
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, like reference numbers will be used for like elements.
Embodiments of user interfaces and associated methods for using a device are described. It should be understood, however, that the user interfaces and associated methods can be applied to numerous device types, such as portable communication devices (e.g., tablets or mobile phones). The portable communication device can support a variety of applications, such as wired or wireless communications. The various applications that can be executed on the device can use at least one common physical user-interface device, such as a touchscreen. One or more functions of the touchscreen as well as corresponding information displayed on the device can be adjusted and/or varied from one application to another and/or within a respective application. In this way, a common physical architecture of the device can support a variety of applications with user interfaces that are intuitive and transparent.
The embodiments of the present invention provide systems, devices, methods, and computer-readable instructions to measure one or more biometrics, including heart-rate and pulse waveform, without physical contact with the subject. In the various embodiments, the systems, devices, methods, and instructions collect, process, and analyze video taken in one or more modalities (e.g., visible light, near infrared, thermal, etc.) to produce an accurate pulse waveform for the subject's heartbeat from a distance without constraining the subject's movement or posture. The pulse waveform for the subject's heartbeat may be used as a biometric input to establish features of the physical state of the subject and how they change over a period of observation (e.g., during questioning or other activity).
Remote photoplethysmography (rPPG) is the monitoring of blood volume pulse from a camera at a distance. Using rPPG, the blood volume pulse may be detected from video captured at a distance from the skin's surface. The embodiments of the invention provide an estimate of the blood volume to generate a pulse waveform from a video of one or more subjects at a distance from a camera sensor. Additional diagnostics can be extracted from the pulse waveform, such as heart rate (beats per minute) and heart rate variability, to further assess the physiological state of the subject. The heart rate is a concise description of the dominant frequency in the blood volume pulse, represented in beats per minute (bpm), where one beat is equivalent to one cycle.
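For illustration, the heart rate can be read off a pulse waveform as its dominant frequency. The sketch below is a minimal NumPy example, assuming one waveform sample per video frame and a 0.7-3.0 Hz (42-180 bpm) search band; neither the band nor the FFT approach is mandated by the patent.

    import numpy as np

    def estimate_heart_rate_bpm(waveform, fps, lo_hz=0.7, hi_hz=3.0):
        # Remove the DC component so the spectrum reflects pulsatile change.
        x = np.asarray(waveform, dtype=float)
        x = x - x.mean()
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        power = np.abs(np.fft.rfft(x)) ** 2
        band = (freqs >= lo_hz) & (freqs <= hi_hz)
        # Dominant frequency in the physiological band; one beat per cycle.
        dominant_hz = freqs[band][np.argmax(power[band])]
        return 60.0 * dominant_hz

For example, a 300-frame waveform at 30 fps whose strongest in-band component sits at 1.2 Hz returns 72 bpm.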
The embodiments of the present invention (concurrently, simultaneously, in-parallel, etc.) process the spatial and the temporal dimensions of video stream data using a 3-dimensional convolutional neural network (3DCNN). The main advantage of using 3-dimensional kernels within the 3DCNN is the empirical robustness to movement, talking, and a general lack of constraints on the subject. Additionally, the embodiments provide concise techniques in which the 3DCNN is given a sequence of images and produces a discrete waveform with a real value for every frame. While existing work has deployed a 3DCNN for pulse detection (Yu 2019), the embodiments of the present invention significantly improve the model by modifying the temporal dimension of the 3D kernels with dilations as a function of their depth within the 3DCNN. As a result, a significant improvement in heart rate estimation is achieved without increasing the model size or computational requirements.
Another advantage of the embodiments of the present invention over existing methods is the ability to estimate reliable pulse waveforms rather than relying on long-term descriptions of the signal. Many existing approaches use handcrafted features. By contrast, the embodiments utilize one or more large sets of data. Existing approaches were validated by comparing their estimated heart rate to the subject's physically measured heart rate, which is only a description of the frequency of a signal over long time intervals. By contrast, the embodiments were optimized and validated over short time intervals (e.g., video streams less than 10 seconds, video streams less than 5 seconds, video streams less than 3 seconds) to produce reliable estimates of the pulse waveform rather than a single frequency or heart rate value, which enables further extraction of information to better understand the subject's physiological state.
FIG. 1 illustrates a system 100 for pulse waveform estimation according to an example embodiment of the present invention. System 100 includes optical sensor system 1, video I/O system 6, and video processing system 101.
Optical sensor system 1 includes one or more camera sensors, each respective camera sensor configured to capture a video stream including a sequence of frames. For example, optical sensor system 1 may include a visible-light camera 2, a near-infrared camera 3, a thermal camera 4, or any combination thereof. In the event that multiple camera sensors are utilized (e.g., single modality or multiple modality), the resulting multiple video streams may be synchronized according to synchronization device 5. Alternatively, or additionally, one or more video analysis techniques may be utilized to synchronize the video streams.
Video I/O system 6 receives the captured one or more video streams. For example, video I/O system 6 is configured to receive raw visible-light video stream 7, near-infrared video stream 8, and thermal video stream 9 from optical sensor system 1. Here, the received video streams may be stored according to known digital format(s). In the event that multiple video streams are received (e.g., single modality or multiple modality), fusion processor 10 is configured to combine the received video streams. For example, fusion processor 10 may combine visible-light video stream 7, near-infrared video stream 8, and/or thermal video stream 9 into a fused video stream 11. Here, the respective streams may be synchronized according to the output (e.g., a clock signal) from synchronization device 5.
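As a minimal sketch of one possible fusion step, assuming the streams have already been synchronized to a common clock and resampled to identical frame counts and resolutions (the patent leaves the concrete fusion operation open), the modalities could simply be stacked channel-wise:

    import numpy as np

    def fuse_streams(visible, near_ir, thermal):
        # Each stream: a (T, H, W, C) array aligned to the synchronization
        # device's clock; concatenation yields one multi-modal stream.
        return np.concatenate([visible, near_ir, thermal], axis=-1)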
At video processing system 101, region of interest detector 12 detects (i.e., spatially locates) one or more spatial regions of interest (ROI) within each video frame. The ROI may be a face, another body part (e.g., a hand, an arm, a foot, a neck, etc.), or any combination of body parts. Initially, region of interest detector 12 determines one or more coarse spatial ROIs within each video frame. Region of interest detector 12 is robust to strong facial occlusions from face masks and other head garments. Subsequently, frame preprocessor 13 crops the frame to encapsulate the one or more ROIs. In some embodiments, the cropping includes each frame being downsized by bi-cubic interpolation to reduce the number of image pixels to be processed. Alternatively, or additionally, the cropped frame may be further resized to a smaller image.
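A sketch of the ROI detection and cropping stages (detector 12 and frame preprocessor 13) might look as follows, assuming OpenCV's stock Haar-cascade face detector and an illustrative 64x64 output size; the patent does not prescribe a particular detector or crop resolution.

    import cv2

    # Stand-in coarse face detector; any ROI detector could be substituted.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_roi(frame, out_size=(64, 64)):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
        if len(faces) == 0:
            return None  # no ROI located in this frame
        # Keep the largest detection as the coarse spatial ROI.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        crop = frame[y:y + h, x:x + w]
        # Bi-cubic downsizing reduces the pixel count fed to the 3DCNN.
        return cv2.resize(crop, out_size, interpolation=cv2.INTER_CUBIC)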
Sequence preparation system 14 aggregates batches of ordered sequences or subsequences of frames from frame preprocessor 13 to be processed. Next, 3-Dimensional Convolutional Neural Network (3DCNN) 15 receives the sequence or subsequence of frames from sequence preparation system 14. 3DCNN 15 processes the sequence or subsequence of frames to determine the spatial and temporal dimensions of each frame and to produce a pulse waveform point for each frame of the sequence. 3DCNN 15 applies a series of 3-dimensional convolutions, averaging, pooling, and nonlinearities to produce a 1-dimensional signal approximating the pulse waveform 16 for the input sequence or subsequences.
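The following is a minimal PyTorch sketch in the spirit of 3DCNN 15: stacked 3-dimensional convolutions, nonlinearities, and spatial averaging/pooling that collapse each frame of a clip into one waveform point. The layer shapes and channel counts are illustrative assumptions, not the patented architecture.

    import torch
    import torch.nn as nn

    class PulseNet3D(nn.Module):
        def __init__(self, in_ch=3):
            super().__init__()
            self.features = nn.Sequential(
                # Temporal kernel 5, spatial kernel 3; padding keeps T, H, W.
                nn.Conv3d(in_ch, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
                nn.ReLU(),
                nn.AvgPool3d(kernel_size=(1, 2, 2)),  # pool space, keep time
                nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
                nn.ReLU(),
            )
            self.head = nn.Conv3d(32, 1, kernel_size=1)

        def forward(self, clip):  # clip: (B, C, T, H, W)
            h = self.head(self.features(clip))    # (B, 1, T, H', W')
            # Global spatial average gives one real value per frame.
            return h.mean(dim=(3, 4)).squeeze(1)  # (B, T)

A clip of shape (1, 3, 135, 64, 64), e.g. 4.5 seconds at 30 fps, yields a 135-point waveform, one value per frame.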
In some configurations, pulse aggregation system 17 combines any number of pulse waveforms 16 from the sequences or subsequences of frames into an aggregated pulse waveform 18 to represent the entire video stream. Diagnostic extractor 19 is configured to compute the heart rate and the heart rate variability from the aggregated pulse waveform 18. To identify heart rate variability, the calculated heart rate of various subsequences may be compared. Display unit 20 receives real-time or near real-time updates from diagnostic extractor 19 and displays aggregated pulse waveform 18, heart rate, and heart rate variability to an operator. Storage Unit 21 is configured to store aggregated pulse waveform 18, heart rate, and heart rate variability associated with the subject.
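As one illustration of diagnostic extraction, the sketch below derives a common heart rate variability measure (the standard deviation of inter-beat intervals, SDNN) via SciPy peak detection; this is an assumed method, and the patent alternatively describes comparing heart rates computed over different subsequences.

    import numpy as np
    from scipy.signal import find_peaks

    def hrv_sdnn_seconds(waveform, fps, max_bpm=180):
        # Successive beats must be at least one minimal beat period apart.
        min_distance = int(fps * 60.0 / max_bpm)
        peaks, _ = find_peaks(waveform, distance=min_distance)
        ibi = np.diff(peaks) / fps  # inter-beat intervals in seconds
        return float(np.std(ibi))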
Additionally, or alternatively, the sequence of frames may be partitioned into partially overlapping subsequences within sequence preparation system 14, wherein a first subsequence of frames overlaps with a second subsequence of frames. The overlap in frames between subsequences prevents edge effects. Here, pulse aggregation system 17 may apply a Hann function to each subsequence, and the overlapping subsequences are added to generate aggregated pulse waveform 18 with the same number of samples as frames in the original video stream. In some configurations, each subsequence is individually passed to the 3DCNN 15, which performs a series of operations to produce a pulse waveform 16 for each subsequence. Each pulse waveform output from the 3DCNN 15 is a time series with a real value for each video frame. Since each subsequence is processed by the 3DCNN 15 individually, they are subsequently recombined.
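The Hann-windowed overlap-add recombination might be sketched as follows, assuming equal-length subsequences that start every step frames (50% overlap when step is half the subsequence length); normalizing by the summed window is an implementation detail the patent does not spell out.

    import numpy as np

    def aggregate_pulse(waveforms, step, total_frames):
        length = len(waveforms[0])
        window = np.hanning(length)  # Hann taper suppresses edge effects
        out = np.zeros(total_frames)
        norm = np.zeros(total_frames)
        for i, w in enumerate(waveforms):
            start = i * step
            out[start:start + length] += window * w
            norm[start:start + length] += window
        # One aggregated sample per frame of the original video stream.
        return out / np.maximum(norm, 1e-8)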
In some embodiments, one or more filters may be applied to the region of interest. For example, one or more wavelengths of LED light may be filtered out. The LED may be shone across the entire region of interest and surrounding surfaces or portions thereof. Additionally, or alternatively, temporal signals in non-skin regions may be further processed. For example, analyzing the eyebrows or the eye's sclera may identify changes strongly correlated with motion, but not necessarily correlated with the photoplethysmogram. If the same periodic signal predicted as the pulse is found on non-skin surfaces, it may indicate a non-real subject or an attempted security breach.
Although illustrated as a single system, the functionality of system 100 may be implemented as a distributed system. Further, the functionality disclosed herein may be implemented on separate servers or devices that may be coupled together over a network, such as a security kiosk coupled to a backend server. Further, one or more components of system 100 may not be included. For example, system 100 may be a smartphone or tablet device that includes a processor, memory, and a display, but may not include one or more of the other components shown in FIG. 1 . The embodiments may be implemented using a variety of processing and memory storage devices. For example, a CPU and/or GPU may be used in the processing system to decrease the runtime and calculate the pulse in near real-time. System 100 may be part of a larger system. Therefore, system 100 may include one or more additional functional modules.
FIG. 2 illustrates a computer-implemented method 200 for generating a pulse waveform according to an example embodiment of the present invention.
At 210, a video stream including a sequence of frames is captured. The video stream may include one or more of a visible-light video stream, a near-infrared video stream, and a thermal video stream of a subject. In some instances, method 200 combines at least two of the visible-light video stream, the near-infrared video stream, and/or the thermal video stream into a fused video stream to be processed. The visible-light video stream, the near-infrared video stream, and/or the thermal video stream are combined according to a synchronization device and/or one or more video analysis techniques.
Next, at 220, each frame of the video stream is processed to spatially locate a region of interest. The ROI may be a face, another body part (e.g., a hand, an arm, a foot, a neck, etc.), or any combination of body parts.
Subsequently, at 230, each frame of the video stream is cropped to encapsulate the region of interest. For example, the cropping may include each frame being downsized by bi-cubic interpolation to reduce the number of image pixels to be processed.
At 240, the sequence of frames is processed, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames.
Lastly, at 250, a time series of pulse waveform points is generated to determine the pulse waveform of the subject for the sequence of frames. In some instances, the sequence of frames may be partitioned into partially overlapping subsequences, wherein a first subsequence of frames overlaps with a second subsequence of frames. Here, a Hann function may be applied to each subsequence, and the overlapping subsequences are added to generate the pulse waveform. In the various embodiments, the pulse waveform may be utilized to calculate a heart rate or heart rate variability. To identify heart rate variability, the calculated heart rate of various subsequences may be compared.
FIG. 3 illustrates a video based application 300 for generating a pulse waveform according to an example embodiment of the present invention. As illustrated in FIG. 3 , application 300 displays the captured video stream of subject 310. Each frame of the captured video stream is processed to spatially locate a region of interest 320. For example, region of interest 320 may encapsulate one or more body parts of subject 310, such as the face. Using the various techniques described herein, the pulse waveform 330 of subject 310 is generated and displayed.
FIG. 4 is a graphical representation 400 that illustrates an exponentially increasing dilation rate as a function of network depth. As illustrated, the dilation rate is increased along the temporal axis of the 3D convolutions at depth d=1-4, giving an increasing temporal receptive field while keeping the kernel width constant at kt=5. By modifying the temporal dimension of the three-dimensional (3D) kernels with dilations as a function of their depth within the 3DCNN, the embodiments of the present invention significantly improve the model, providing a wider temporal context for the pulse signal.
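A short sketch of such a schedule, assuming PyTorch and illustrative channel counts: the temporal dilation doubles at each of the four depths (1, 2, 4, 8) while the temporal kernel width stays at kt=5, and padding is chosen so the number of frames is preserved.

    import torch.nn as nn

    channels = [3, 16, 16, 16, 16]  # illustrative; not the patented sizes
    layers = []
    for d in range(1, 5):                    # network depth d = 1..4
        dilation = 2 ** (d - 1)              # temporal dilation 1, 2, 4, 8
        layers.append(nn.Conv3d(
            channels[d - 1], channels[d],
            kernel_size=(5, 3, 3),           # kt = 5 along time
            dilation=(dilation, 1, 1),
            padding=(2 * dilation, 1, 1)))   # preserves T (and H, W)
        layers.append(nn.ReLU())
    dilated_stack = nn.Sequential(*layers)

Because the effective temporal extent of a kernel is dilation * (kt - 1) + 1, the receptive field grows exponentially with depth at no extra parameter or compute cost.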
The embodiments of the present invention may be readily applied to numerous applications and domains. Numerous, but non-exhaustive, examples will be discussed. In some embodiments, the techniques described herein may be applied at an immigration kiosk, border control booth, entry gate, or the like. In other embodiments, the techniques described herein may be applied at an electronic device (e.g., tablet, mobile phone, computer, etc.) that hosts a video analysis application, such as a social media application or health monitoring application. In yet other embodiments, the techniques described herein may be used to distinguish between liveness and conversely synthetic video (e.g., deep fake video) by checking for expected differences in the pulse waveform detected at respective regions of interest (e.g., in the face and hand regions of interest).
The techniques described herein may be readily applied to numerous health monitoring/telemedicine and other applications and domains. Examples include injury precursor detection, impairment detection, health or biometric monitoring (e.g., vitals, stroke, concussion, cognitive testing, recovery tracking, diagnostics, alerts, physical therapy, physical symmetry, and biometric collection), stress detection (e.g., anxiety, nervousness, excitement), epidemic monitoring, illness detection, infant monitoring (e.g., sudden infant death syndrome (SIDS)), monitoring interest in an activity (e.g., video application, focus group testing, gaming applications), and monitoring for non-verbal communication cues and deception (e.g., gambling applications). In addition, the techniques described herein may be readily applied to exercise engagement as well as entertainment, audience, and other monitoring applications.
By implementing the various embodiments, the video stream time duration for extracting information is reduced, and additional information is determined by analyzing the video stream. The embodiments were optimized and validated over short time intervals to produce reliable estimates of the pulse waveform rather than a description of the frequency of periodic changes in blood volume.
It will be apparent to those skilled in the art that various modifications and variations can be made in the video based detection of pulse waveform of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (21)

What is claimed is:
1. A computer-implemented method for generating a pulse waveform, the computer-implemented method comprising:
capturing a video stream including a sequence of frames;
processing each frame of the video stream to spatially locate a region of interest;
cropping each frame of the video stream to encapsulate the region of interest;
processing the sequence of frames, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames;
modifying the temporal dimension of at least one frame with one or more dilations; and
generating a time series of pulse waveform points to generate the pulse waveform of a subject for the sequence of frames.
2. The computer-implemented method according to claim 1, wherein the video stream includes one or more of a visible-light video stream, a near-infrared video stream, and a thermal video stream of a subject.
3. The computer-implemented method according to claim 2, further comprising:
combining at least two of the visible-light video stream, the near-infrared video stream, and the thermal video stream into a fused video stream.
4. The computer-implemented method according to claim 3, wherein the visible-light video stream, the near-infrared video stream, and/or the thermal video stream are combined according to a synchronization device.
5. The computer-implemented method according to claim 1, wherein the cropping includes each frame being downsized by bi-cubic interpolation to reduce the number of image pixels.
6. The computer-implemented method according to claim 1, wherein the region of interest includes a face.
7. The computer-implemented method according to claim 1, wherein the region of interest includes two or more body parts.
8. The computer-implemented method according to claim 1, further comprising:
partitioning the sequence of frames into partially overlapping subsequences,
wherein a first subsequence of frames overlaps with a second subsequence of frames.
9. The computer-implemented method according to claim 8, further comprising:
applying a Hann function to each subsequence;
adding the overlapping subsequences to generate the pulse waveform.
10. The computer-implemented method according to claim 1, further comprising:
calculating a heart rate or heart rate variability based on the pulse waveform.
11. A system for generating a pulse waveform, the system comprising:
a processor; and
a memory storing one or more programs for execution by the processor, the one or more programs including instructions for:
capturing a video stream including a sequence of frames;
processing each frame of the video stream to spatially locate a region of interest;
cropping each frame of the video stream to encapsulate the region of interest;
processing the sequence of frames, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames;
modifying the temporal dimension of at least one frame with one or more dilations; and
generating a time series of pulse waveform points to generate the pulse waveform of a subject for the sequence of frames.
12. The system according to claim 11, wherein the video stream includes one or more of a visible-light video stream, a near-infrared video stream, and a thermal video stream of a subject.
13. The system according to claim 12, further comprising:
combining at least two of the visible-light video stream, the near-infrared video stream, and the thermal video stream into a fused video stream.
14. The system according to claim 13, wherein the visible-light video stream, the near-infrared video stream, and/or the thermal video stream are combined according to a synchronization device.
15. The system according to claim 11, wherein the cropping includes each frame being downsized by bi-cubic interpolation to reduce the number of image pixels.
16. The system according to claim 11, wherein the region of interest includes a face.
17. The system according to claim 11, wherein the region of interest includes two or more body parts.
18. The system according to claim 11, further comprising:
partitioning the sequence of frames into partially overlapping subsequences, wherein a first subsequence of frames overlaps with a second subsequence of frames.
19. The system according to claim 18, further comprising:
applying a Hann function to each subsequence;
adding the overlapping subsequences to generate the pulse waveform.
20. The system according to claim 11, further comprising:
calculating a heart rate or heart rate variability based on the pulse waveform.
21. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a processor, cause the processor to generate a pulse waveform, the instructions comprising:
capturing a video stream including a sequence of frames;
processing each frame of the video stream to spatially locate a region of interest;
cropping each frame of the video stream to encapsulate the region of interest;
processing the sequence of frames, by a 3-dimensional convolutional neural network, to determine the spatial and temporal dimensions of each frame of the sequence of frames and to produce a pulse waveform point for each frame of the sequence of frames;
modifying the temporal dimension of at least one frame with one or more dilations; and
generating a time series of pulse waveform points to generate the pulse waveform of a subject for the sequence of frames.
US17/591,929 2021-02-03 2022-02-03 Video based detection of pulse waveform Active 2043-09-13 US12343177B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/IB2022/050960 WO2022167979A1 (en) 2021-02-03 2022-02-03 Video based detection of pulse waveform
US17/591,929 US12343177B2 (en) 2021-02-03 2022-02-03 Video based detection of pulse waveform
TNP/2023/000194A TN2023000194A1 (en) 2021-02-03 2022-02-03 Video based detection of pulse waveform

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163145140P 2021-02-03 2021-02-03
US17/591,929 US12343177B2 (en) 2021-02-03 2022-02-03 Video based detection of pulse waveform

Publications (2)

Publication Number Publication Date
US20220240865A1 (en) 2022-08-04
US12343177B2 (en) 2025-07-01

Family

ID=82612104

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/591,929 Active 2043-09-13 US12343177B2 (en) 2021-02-03 2022-02-03 Video based detection of pulse waveform

Country Status (3)

Country Link
US (1) US12343177B2 (en)
TN (1) TN2023000194A1 (en)
WO (1) WO2022167979A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024209367A1 (en) * 2023-04-03 2024-10-10 Securiport Llc Liveness detection
WO2025215588A1 (en) * 2024-04-10 2025-10-16 Securiport Llc Video based unsupervised learning of periodic signals

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130296660A1 (en) 2012-05-02 2013-11-07 Georgia Health Sciences University Methods and systems for measuring dynamic changes in the physiological parameters of a subject
US20150223700A1 (en) * 2014-02-12 2015-08-13 Koninklijke Philips N.V. Device, system and method for determining vital signs of a subject based on reflected and transmitted light
US20180270436A1 (en) 2016-04-07 2018-09-20 Tobii Ab Image sensor for vision based on human computer interaction
EP3127485B1 (en) 2015-08-06 2019-10-23 Covidien LP System for local three dimensional volume reconstruction using a standard fluoroscope
US20200121256A1 (en) * 2018-10-19 2020-04-23 Microsoft Technology Licensing, Llc Video-based physiological measurement using neural networks
US20200337776A1 (en) 2019-04-25 2020-10-29 Surgical Safety Technologies Inc. Body-mounted or object-mounted camera system
WO2020247894A1 (en) 2019-06-07 2020-12-10 Eyetech Digital Systems, Inc. Devices and methods for reducing computational and transmission latencies in cloud based eye tracking systems

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130296660A1 (en) 2012-05-02 2013-11-07 Georgia Health Sciences University Methods and systems for measuring dynamic changes in the physiological parameters of a subject
US20150223700A1 (en) * 2014-02-12 2015-08-13 Koninklijke Philips N.V. Device, system and method for determining vital signs of a subject based on reflected and transmitted light
EP3127485B1 (en) 2015-08-06 2019-10-23 Covidien LP System for local three dimensional volume reconstruction using a standard fluoroscope
US20180270436A1 (en) 2016-04-07 2018-09-20 Tobii Ab Image sensor for vision based on human computer interaction
US20200121256A1 (en) * 2018-10-19 2020-04-23 Microsoft Technology Licensing, Llc Video-based physiological measurement using neural networks
US20200337776A1 (en) 2019-04-25 2020-10-29 Surgical Safety Technologies Inc. Body-mounted or object-mounted camera system
WO2020247894A1 (en) 2019-06-07 2020-12-10 Eyetech Digital Systems, Inc. Devices and methods for reducing computational and transmission latencies in cloud based eye tracking systems

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
De Haan, Gerard, and Vincent Jeanne. "Robust pulse rate from chrominance-based rPPG." IEEE transactions on biomedical engineering 60.10 (2013): 2878-2886. (Year: 2013). *
Gerald de Haan et al., "Robust Pulse Rate From Chrominance-Based rPPG", IEEE Transactions on Biomedical Engineering, vol. 60, No. 10, pp. 2878-2886, Oct. 2013.
International Search Report & Written Opinion of the ISR for corresponding PCT Application No. PCT/IB2022/050960 mailed May 11, 2022.
Ming-Zher Poh et al., "Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam", IEEE Transactions on Biomedical Engineering, vol. 58, No. 1, pp. 7-11, Jan. 2011.
Ming-Zher Poh et al., "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation.", Optics Express, vol. 18, No. 10, pp. 10762-10774, May 10, 2010.
W. Wang et al., Algorithmic principles of remote-PPG, IEEE Transactions on Biomedical Engineering, 64(7), pp. 1479-1491, DOI: 10.1109/TBME.2016.2609282, Jan. 7, 2017.
Yu, Zitong, Xiaobai Li, and Guoying Zhao. "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks." arXiv preprint arXiv:1905.02419 (2019). (Year: 2019). *
Zitong Yu et al., "Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks", RPPG Measurement Using Spatio-Temporal Networks, pp. 1-12, 2019.

Also Published As

Publication number Publication date
WO2022167979A1 (en) 2022-08-11
TN2023000194A1 (en) 2025-04-03
US20220240865A1 (en) 2022-08-04


Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: ALTER DOMUS (US) LLC, AS COLLATERAL AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNOR:SECURIPORT LIMITED LIABILITY COMPANY;REEL/FRAME:062060/0686

Effective date: 20221212

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: SECURIPORT LLC, VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CARPENTER, NATHAN;OLIE, LEANDRO;REEL/FRAME:071180/0446

Effective date: 20250521

Owner name: UNIVERSITY OF NOTRE DAME DU LAC, INDIANA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SPETH, JEREMY;FLYNN, PATRICK;CZAJKA, ADAM;AND OTHERS;SIGNING DATES FROM 20250429 TO 20250430;REEL/FRAME:071180/0867

STCF Information on status: patent grant

Free format text: PATENTED CASE