
WO2025227125A1 - Detecting congestive heart failure from electrocardiography data using machine learning - Google Patents

Detecting congestive heart failure from electrocardiography data using machine learning

Info

Publication number
WO2025227125A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
heart failure
congestive heart
ecg
ecg data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/026525
Other languages
French (fr)
Inventor
Kevin Patrick COHOON
Sebastian BERISHA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medical College of Wisconsin
Original Assignee
Medical College of Wisconsin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medical College of Wisconsin filed Critical Medical College of Wisconsin
Publication of WO2025227125A1 publication Critical patent/WO2025227125A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/318Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346Analysis of electrocardiograms
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • Electrocardiography (ECG) systems measure electrophysiological signals of cardiac activity in a subject. For example, voltages of the electrical activity of the heart are measured using electrodes placed on the subject’s skin. In a conventional 12-lead ECG, electrodes are placed on the subject’s limbs and chest. Although ECG data are representative of electrical activity of the heart, it is possible that additional information about the subject’s health can be derived or estimated from the ECG data.
  • the present disclosure addresses the aforementioned drawbacks by providing a method for generating classified feature data indicative of congestive heart failure in a subject.
  • the method includes accessing electrocardiography (ECG) data using a computer system, where the ECG data are acquired from a subject.
  • a machine learning model is also accessed with the computer system, where the machine learning model has been trained on training data to detect a presence of congestive heart failure based on ECG data.
  • the ECG data are applied to the machine learning model using the computer system, generating output as classified feature data indicative of congestive heart failure.
  • the classified feature data are output to a user.
  • FIG. 1 is a flowchart setting forth the steps of an example method for generating classified feature data indicative of congestive heart failure in a subject by applying electrocardiography (ECG) data to a machine learning model, such as a neural network.
  • FIG. 2 is a flowchart setting forth the steps of an example method for training a machine learning model, such as a neural network, to detect the presence or likelihood of a subject having congestive heart failure based on ECG data obtained from the subject.
  • FIG. 3 illustrates an example wearable device that can be used to record physiological data, such as ECG data, and to generate classified feature data indicative of congestive heart failure.
  • FIG. 4 is a block diagram of example components that can implement the wearable device of FIG. 3.
  • FIG. 5 illustrates an example system for detecting congestive heart failure based on ECG data.
  • FIG. 6 is a block diagram of example components that can implement the system of FIG. 5.
  • the ECG data obtained from the subject, which may be standard 12-lead ECG data, are applied to a neural network or other machine learning algorithm, which generates output data as classified feature data indicating the presence and/or likelihood of congestive heart failure.
  • the systems and methods described in the present disclosure provide a point-of-care test for detecting congestive heart failure in a subject, which does not require specialized imaging or other clinical tests.
  • the systems and methods described in the present disclosure utilize a neural network or other machine learning or AI algorithm to detect, identify, or otherwise characterize subtle patterns in ECG data that are indicative of the presence of congestive heart failure. Further, the systems and methods are capable of differentiating various levels of congestive heart failure, various functional defects associated with congestive heart failure, and so on. As a result, the disclosed systems and methods can be used as an initial screening tool in a hospital or clinic-based setting and translated to a point-of-care test that can be delivered through a portable or potentially a wearable device.
  • the disclosed systems and methods utilize a machine learning model to detect and characterize congestive heart failure from a standard 12-lead ECG at the point of injury.
  • the disclosed systems and methods provide a cost-effective, non-invasive, low-risk intervention to patients or other individual users that can augment existing methods to detect and characterize congestive heart failure from a standard 12-lead ECG.
  • the disclosed systems and methods can be implemented using a wearable patch or other wearable device with one or more channels and wearable elements, including shirts, watches, bands, and bracelets with conductive elements capable of recording physiologic signals.
  • in still other embodiments, ECG data can be collected from implanted devices such as loop recorders, pacemakers, or defibrillators.
  • ECG data can be collected from contactless sensors, such as RF-based sensors.
  • the recorded ECG data can be processed by a machine learning model or algorithm to generate classified feature data indicative of congestive heart failure, allowing the user or clinicians to be notified (e.g., via an alert or message) when conditions of congestive heart failure, or other functional defects related to congestive heart failure, have been detected.
  • Inputs from various different data sources can be integrated into a single output to provide a scalable and automated means for clinicians to analyze whether congestive heart failure may be present by analyzing only a standard 12-lead ECG or ECG data obtained with other physiological sensors (e.g., a wearable device, an implanted device, a contactless RF-based sensor).
  • the ECG data can be input to one or more artificial intelligence and/or machine learning (AI/ML) models to generate classified feature data indicative of the presence of a neurological condition in the subject from whom the ECG data were obtained.
  • Any suitable AI/ML or other computational modeling can be used for the analysis of the ECG data, including deep learning, generative adversarial networks (GANs), convolutional neural networks (CNNs), large language models (LLMs), foundation models (e.g., Riffusion), anomaly detection, diffusion models, etc.
  • classical machine learning algorithms and models can also be used, such as random forest, support vector machines, naive Bayes classifiers, nearest neighbors, decision trees, AdaBoost, quadratic discriminant analysis (QDA), Gaussian processes, etc.
  • these classical machine learning models could be used for the analysis of ECG data and/or related patient health data. Alternatively, they could be used to initially determine which AI/ML algorithm or model is likely to provide the highest accuracy, in order to develop further models for the analysis of ECG data and/or related patient health data for detecting the presence of a neurological condition in the subject from whom the ECG data were obtained.
  • the neural network or other machine learning algorithm takes as input ECG data (e.g., 12-lead ECG measurement data) and/or transformed ECG data (e.g., scalogram data, spectrogram data, other N-dimensional data generated from ECG data) and generates output as classified feature data indicative of congestive heart failure in the subject from whom the ECG data were obtained.
  • the neural network or other machine learning algorithm may take transformed ECG data (e.g., scalogram data, spectrogram data, or other N-dimensional (for N > 2) images, maps, matrices, or data structures generated from ECG data) as an input.
  • the method includes accessing ECG data with a computer system, as indicated at step 102.
  • Accessing the ECG data may include retrieving such data from a memory or other suitable data storage device or medium.
  • accessing the ECG data may include acquiring such data with a wearable device or an ECG system (e.g., an ECG measurement system using a 12-lead configuration, or other lead or electrode combination) and transferring or otherwise communicating the data to the computer system, which may be a part of the wearable device or ECG system.
  • the ECG data may include ECG signals. Additionally or alternatively, the ECG data may include variables, parameters, or other measurements that are computed, extracted, or otherwise derived from ECG signals.
  • the ECG data may include ECG measurements such as ventricular rate in beats per minute (bpm), atrial rate in bpm, P-R interval in milliseconds (ms), QRS duration in ms, Q-T interval in ms, QTc per Bazett’s algorithm, P axis, R axis, T axis, QRS count, P-wave onset in the median beat, P-wave offset in the median beat, Q-onset in the median beat, Q-offset in the median beat, T-onset in the median beat, T-offset in the median beat, number of QRS complexes, QRS duration, QT interval, corrected QT interval, PR interval, ventricular rate, average R-R interval, Q-onset (median complex sample point), Q-offset (median complex sample point), P-onset (median complex sample point), P-offset (median complex sample point), T-onset (median complex sample point), QT calculated with the Fridericia algorithm, P-wave amplitude at P-onset, P-wave amplitude, P-wave duration, P-wave area, P-wave intrinsicoid (time from P-onset to peak of P), P-prime amplitude, P-prime duration, P-prime area, P-prime intrinsicoid (time from P-onset to peak of P-prime), Q-wave amplitude, Q-wave duration, Q-wave area, Q intrinsicoid (time from Q-onset to peak of Q), S amplitude, S duration, S-wave area, and S intrinsicoid (time from Q-onset to peak of S).
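The rate-corrected QT measurements listed above (Bazett and Fridericia corrections) are simple functions of the measured QT and R-R intervals. A minimal sketch using the standard textbook formulas follows; the function names and the example interval values are illustrative, not taken from the disclosure:

```python
import math

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Corrected QT per Bazett: QTc = QT / sqrt(RR), with RR in seconds."""
    rr_s = rr_ms / 1000.0
    return qt_ms / math.sqrt(rr_s)

def qtc_fridericia(qt_ms: float, rr_ms: float) -> float:
    """Corrected QT per Fridericia: QTc = QT / cbrt(RR), with RR in seconds."""
    rr_s = rr_ms / 1000.0
    return qt_ms / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1000 ms) both corrections leave QT unchanged.
print(round(qtc_bazett(400, 1000), 1))      # 400.0
print(round(qtc_fridericia(400, 1000), 1))  # 400.0
```

At faster heart rates (shorter R-R intervals) both formulas scale the measured QT upward, with Bazett's square-root correction adjusting more aggressively than Fridericia's cube-root correction.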
  • a 12-lead ECG system can include a I Lateral lead (also referred to as a I lead), a II Inferior lead (also referred to as a II lead), a III Inferior lead (also referred to as a III lead), an aVR lead, an aVL Lateral lead (also referred to as an aVL lead), an aVF Inferior lead (also referred to as an aVF lead), a V1 Septal lead (also referred to as a V1 lead), a V2 Septal lead (also referred to as a V2 lead), a V3 Anterior lead (also referred to as a V3 lead), a V4 Anterior lead (also referred to as a V4 lead), a V5 Lateral lead (also referred to as a V5 lead), and a V6 Lateral lead (also referred to as a V6 lead).
  • in other embodiments, the ECG system can implement fewer than 12 leads.
  • the ECG data may be obtained using an RF-based sensor device.
  • RF-based sensors are capable of measuring ECG signals in addition to other biophysical signals (e.g., heartbeats, respiratory rates) using transmitted RF waves, which in some instances may include RF waves transmitted according to Wi-Fi or another wireless network protocol.
  • Such sensors enable contactless measurement of ECG data or other biophysical data.
  • the RF-based sensors can be implemented in a standalone device, integrated into a mobile device (e.g., a smartphone, a tablet computer), integrated into a wearable device (e.g., a smartwatch, a fitness tracker, a wearable patch, a band, a bracelet), integrated into other wearables (e.g., a shirt or other wearable garment with conductive elements capable of recording physiologic signals), integrated into an implanted device (e.g., loop recorders, pacemakers, defibrillators), integrated into other medical devices (e.g., digital stethoscopes), or integrated into another device or system (e.g., an automobile or other vehicle, such as an autonomous vehicle that can transport an individual to a clinic or hospital if a condition is detected).
  • radiofrequency signals can detect respiratory rates and other signals that can then be synchronously combined with ECG data, phonocardiogram (PCG) data, and/or continuous arterial blood pressure waveform data and used in the neural networks or other Al models described in the present disclosure.
  • accessing the ECG data may include accessing transformed ECG data.
  • Accessing the transformed ECG data may include retrieving such data from a memory or other suitable data storage device or medium.
  • accessing the transformed ECG data may include generating such data with a computer system.
  • the transformed ECG data may be generated from ECG data acquired from the subject.
  • transformed ECG data may include scalogram data, spectrogram data, or other N-dimensional data generated from ECG data.
  • scalogram data may include one or more scalograms generated from ECG data.
  • a scalogram includes an image, map, or other N-dimensional matrix or data structure depicting the time-frequency distribution of ECG data.
  • a scalogram may be generated by computing a continuous wavelet transform (CWT) of the ECG data and constructing the scalogram based on the CWT coefficients.
  • the scalogram may be depicted as a heat map or other image.
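The scalogram construction described above can be illustrated with a short sketch: a crude continuous wavelet transform of a one-dimensional signal using a complex Morlet wavelet, with the coefficient magnitudes forming the scalogram rows. The sampling rate, scale range, and synthetic test signal below are illustrative stand-ins, not values from the disclosure:

```python
import numpy as np

def morlet(t, w=5.0):
    # Complex Morlet wavelet (admissibility correction term omitted)
    return np.pi ** -0.25 * np.exp(1j * w * t) * np.exp(-t ** 2 / 2)

def cwt_scalogram(x, scales):
    """Magnitude-of-CWT scalogram: rows are scales, columns are time samples."""
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        m = int(min(10 * s, len(x)))          # finite wavelet support per scale
        t = (np.arange(m) - m / 2) / s        # time axis scaled by s
        psi = morlet(t) / np.sqrt(s)          # L2-normalized scaled wavelet
        out[i] = np.abs(np.convolve(x, psi.conj()[::-1], mode="same"))
    return out

fs = 250                                      # assumed ECG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t)           # synthetic stand-in for one ECG lead
scalogram = cwt_scalogram(signal, scales=np.arange(2, 32))
print(scalogram.shape)                        # (30, 500)
```

The resulting 2-D array can be rendered as a heat map, as the disclosure suggests, and stacked across leads to form an N-dimensional input for the machine learning model.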
  • spectrogram data may include one or more spectrograms generated from ECG data.
  • a spectrogram includes an image, map, or other N-dimensional matrix or data structure depicting a spectrum of frequencies of ECG signals in the ECG data as they vary with time.
  • a spectrogram may be generated by computing a Fourier transform of the ECG signals in the ECG data and constructing the spectrogram based on the coefficients of the Fourier transform.
  • the spectrogram may be depicted as a heat map or other image.
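The spectrogram construction described above (a Fourier transform computed over sliding windows of the signal) can be sketched with a basic short-time Fourier transform; the window parameters, sampling rate, and synthetic test signal are illustrative assumptions:

```python
import numpy as np

def magnitude_spectrogram(x, fs, nperseg=128, noverlap=64):
    """Magnitude spectrogram from a short-time Fourier transform."""
    step = nperseg - noverlap
    win = np.hanning(nperseg)
    frames = np.array([x[i:i + nperseg] * win
                       for i in range(0, len(x) - nperseg + 1, step)])
    spec = np.abs(np.fft.rfft(frames, axis=-1)).T   # (freq bins, time frames)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, spec

fs = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 25 * t)               # synthetic 25 Hz component
freqs, spec = magnitude_spectrogram(x, fs)
peak_hz = freqs[np.argmax(spec.mean(axis=1))]
print(round(float(peak_hz), 1))              # close to 25 Hz
```

Each column of `spec` is the frequency content of one time window, so the 25 Hz test tone dominates the frequency bin nearest 25 Hz in every frame.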
  • N-dimensional matrices or other data structures may be generated from the ECG signals in the ECG signal data.
  • other N-dimensional matrices may include Gramian angular field maps, recurrence plot maps, and/or Markov transition field maps. These images could also be fused by combining scalogram, spectrogram, Gramian angular field, recurrence plot, Markov transition field, and/or other N-dimensional images, maps, or matrices to form multimodal image fusion data and multimodal feature fusion data.
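One of the transforms listed above, the Gramian angular field, can be sketched in a few lines: the series is rescaled to [-1, 1], mapped to polar angles, and the pairwise angular sums form an N × N image. The summation-field variant is shown, and the test series is illustrative:

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field (GASF) of a 1-D time series."""
    xs = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(xs, -1.0, 1.0))            # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])         # pairwise angular sums

x = np.sin(np.linspace(0, 2 * np.pi, 64))              # synthetic series
gaf = gramian_angular_field(x)
print(gaf.shape)                                        # (64, 64)
```

Because each element depends on a pair of time points, the image preserves temporal correlations, which is what makes these maps useful as CNN inputs and as candidates for the multimodal image fusion described above.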
  • other physiological data can be accessed, including other electrophysiology data (e.g., EEG data, EMG data), PPG data, etc.
  • other physiological data may include echocardiogram data, such as echocardiograms, echocardiogram findings, echocardiogram variables, or the like.
  • patient characteristics derived from medical imaging, ECG data, other physiological data, or the like may also be accessed.
  • patient characteristics may include data derived from a cardiac catheterization procedure, such as presence of coronary artery disease (CAD), occlusions, thrombolysis in myocardial infarction (TIMI) flow, or the like.
  • additional data may be accessed, such as patient health data.
  • the patient health data may include data stored in, retrieved from, extracted from, or otherwise derived from the patient’s electronic medical record (EMR) and/or electronic health record (EHR).
  • the patient health data can include unstructured text, questionnaire response data, clinical laboratory data, histopathology data, genetic sequencing, medical imaging, and other such clinical data types.
  • Examples of clinical laboratory data and/or histopathology data can include genetic testing and laboratory information, such as performance scores, lab tests, pathology results, prognostic indicators, date of genetic testing, testing method used, and so on.
  • Patient health data can include a set of clinical features associated with information derived from clinical records of a patient, which can include records from family members of the patient. These clinical features and data may be abstracted from unstructured clinical documents, EMR, EHR, or other sources of patient history. Such data may include patient symptoms, diagnosis, treatments, medications, therapies, responses to treatments, laboratory testing results, medical history, geographic locations of each, demographics, or other features of the patient which may be found in the patient’s EMR and/or EHR.
  • features derived from structured, curated, and/or EMR or EHR data may include clinical features such as diagnoses; symptoms; therapies; outcomes; patient demographics, such as patient name, date of birth, gender, and/or ethnicity; diagnosis dates for cancer, illness, disease, or other physical or mental conditions; personal medical history; family medical history; clinical diagnoses, such as date of initial diagnosis; and the like.
  • patient health data may also include features such as treatments and outcomes, such as line of therapy, therapy groups, clinical trials, medications prescribed or taken, surgeries, imaging, adverse effects, and associated outcomes.
  • the patient health data may also include measurement data collected from wearable devices (e.g., physiological measurements or other data recorded with a wearable device).
  • Physiological measurements that may be recorded with a wearable device include heart rate, temperature, or other physical parameters.
  • the patient health data may also include epidemiological data on the prevalence and incidence of relevant diseases, such as weekly observed incidence of new cardiac disease cases, which may change over time.
  • This epidemiological data can be sourced from public health records, patient registries, and other relevant databases. Integrating these disease trends into the model can help ensure that the model is not only learning from the ECG signals themselves, but also from the shifting cardiac disease landscape, thereby improving its ability to capture evolving patterns in ECG readings tied to specific health conditions.
  • a trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 104.
  • the neural network is trained, or has been trained, on training data in order to detect, identify, or otherwise characterize patterns in ECG data and/or transformed ECG data that are indicative of congestive heart failure.
  • Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.
  • retrieving the neural network can also include retrieving, constructing, or otherwise accessing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be retrieved, selected, constructed, or otherwise accessed.
  • An artificial neural network generally includes an input layer, one or more hidden layers (or nodes), and an output layer.
  • the input layer includes as many nodes as inputs provided to the artificial neural network.
  • the number (and the type) of inputs provided to the artificial neural network may vary based on the particular task for the artificial neural network.
  • the input layer connects to one or more hidden layers.
  • the number of hidden layers varies and may depend on the particular task for the artificial neural network. Additionally, each hidden layer may have a different number of nodes and may be connected to the next layer differently. For example, each node of the input layer may be connected to each node of the first hidden layer. The connection between each node of the input layer and each node of the first hidden layer may be assigned a weight parameter. Additionally, each node of the neural network may also be assigned a bias value. In some configurations, each node of the first hidden layer may not be connected to each node of the second hidden layer. That is, there may be some nodes of the first hidden layer that are not connected to all of the nodes of the second hidden layer.
  • Each node of the hidden layer is generally associated with an activation function.
  • the activation function defines how the hidden layer is to process the input received from the input layer or from a previous input or hidden layer. These activation functions may vary and be based on the type of task associated with the artificial neural network and also on the specific type of hidden layer implemented.
  • Each hidden layer may perform a different function.
  • some hidden layers can be convolutional hidden layers which can, in some instances, reduce the dimensionality of the inputs.
  • Other hidden layers can perform statistical functions such as max pooling, which may reduce a group of inputs to the maximum value; an averaging layer; batch normalization; and other such functions.
  • Neural networks including more than a certain number of hidden layers (for example, three) may be considered deep neural networks.
  • the last hidden layer in the artificial neural network is connected to the output layer. Similar to the input layer, the output layer typically has the same number of nodes as the possible outputs.
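The layer structure described above (an input layer, weighted connections with per-node biases, activation functions in the hidden layers, and an output layer with one node per possible output) can be sketched as a plain forward pass. The layer sizes, random weights, and two-class output below are illustrative assumptions, not the disclosure's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Activation function applied at each hidden layer
    return np.maximum(z, 0)

def softmax(z):
    # Output-layer activation producing class probabilities
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 12 ECG-derived input features, two hidden layers, 2 classes
sizes = [12, 32, 16, 2]
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Weighted sums plus biases at each layer, with an activation per layer."""
    for i, (W, b) in enumerate(params):
        z = x @ W + b
        x = relu(z) if i < len(params) - 1 else softmax(z)
    return x

probs = forward(rng.normal(size=(1, 12)))
print(probs.shape, round(float(probs.sum()), 6))  # (1, 2) 1.0
```

Training would optimize the weight and bias arrays in `params` against labeled ECG examples; here they are random placeholders to show the data flow only.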
  • the ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data are then input to the one or more trained neural networks, generating output as classified feature data, as indicated at step 106.
  • the classified feature data may include a congestive heart failure risk score.
  • the congestive heart failure risk score can provide physicians or other clinicians with a recommendation to consider additional monitoring (e.g., cardiac monitoring, respiratory monitoring) for subjects whose ECG data indicate the likelihood of the subject suffering from congestive heart failure.
  • the classified feature data may indicate the probability for a particular classification.
  • the classified feature data may also indicate whether a subject’s vitals or conditions are improving, worsening, or staying the same over a particular time span.
  • the classified feature data may classify the ECG data as indicating a particular type of congestive heart failure, such as congestive heart failure with reduced ejection fraction, congestive heart failure with preserved ejection fraction, systolic congestive heart failure, diastolic congestive heart failure, left-sided congestive heart failure, right-sided congestive heart failure, biventricular failure, or the like.
  • the classified feature data can differentiate between different types of congestive heart failure and/or between different subtypes of congestive heart failure.
  • the classified feature data can differentiate one type of congestive heart failure from other types (e.g., left-sided congestive heart failure versus right-sided congestive heart failure) in addition to differentiating particular subtypes (e.g., congestive heart failure with reduced ejection fraction, congestive heart failure with preserved ejection fraction, left-sided systolic congestive heart failure, left-sided diastolic congestive heart failure) from other subtypes.
  • the classified feature data can indicate a stage of congestive heart failure.
  • the classified feature data may indicate whether the ECG data include patterns, features, or characteristics indicative of Stage I congestive heart failure, Stage II congestive heart failure, Stage III congestive heart failure, or Stage IV congestive heart failure.
  • the classified feature data can differentiate between different underlying causes and/or precipitating factors of congestive heart failure, such as congestive heart failure caused by coronary artery disease, congestive heart failure caused by hypertension, congestive heart failure caused by valvular heart disease, congestive heart failure caused by myocarditis, or the like.
  • the classified feature data may indicate a severity of congestive heart failure.
  • the classified feature data may include a severity score that quantifies a severity of congestive heart failure.
  • the classified feature data may also indicate a CHF score that quantifies heart function, a symptom of congestive heart failure, an underlying cause of congestive heart failure, and/or a precipitating factor of congestive heart failure.
  • the CHF score may quantify ejection fraction (EF) for the subject.
  • the classified feature data generated by inputting the ECG data to the trained neural network(s) can then be displayed to a user, stored for later use or further processing, or both, as indicated at step 108.
  • Referring now to FIG. 2, a flowchart is illustrated as setting forth the steps of an example method 200 for training one or more neural networks (or other suitable machine learning algorithms) on training data, such that the one or more neural networks are trained to receive input as ECG data in order to generate output as classified feature data indicative of congestive heart failure.
  • the neural network(s) can implement any number of different neural network architectures.
  • the neural network(s) could implement a convolutional neural network, a residual neural network, or the like.
  • the neural network(s) may implement deep learning.
  • the neural network(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on.
  • a large language model, generative pre-trained transformer model, or other foundation model may also be used, as described in more detail below.
  • the method includes accessing training data with a computer system, as indicated at step 202.
  • Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium.
  • accessing the training data may include acquiring such data with a wearable device, a network of wearable devices, an RF-based sensor, an ECG system, or the like, and transferring or otherwise communicating the data to the computer system.
  • the training data can include ECG data and/or transformed ECG data obtained from a plurality of subjects.
  • the ECG data may be obtained using 12-lead configurations, or fewer leads (e.g., single lead, three leads, six leads, and the like).
  • the transformed ECG data may include scalogram data, spectrogram data, or other N-dimensional data generated from ECG data acquired from the plurality of subjects.
  • the training data may include other data, such as patient health data or other health information collected from the subjects.
  • the training data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data that have been labeled (e.g., labeled as containing patterns, features, or characteristics indicative of congestive heart failure; labeled as being collected from a subject having a particular type, subtype, and/or stage of congestive heart failure; and the like).
  • the method can include assembling training data from ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data using a computer system.
  • This step may include assembling the ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data into an appropriate data structure on which the machine learning algorithm can be trained.
  • Assembling the training data may include assembling ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data and/or transformed ECG data, and other relevant data.
  • assembling the training data may include generating labeled data and including the labeled data in the training data.
  • Labeled data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data and/or transformed ECG data, or other relevant data that have been labeled as belonging to, or otherwise being associated with, one or more different classifications or categories.
  • labeled data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data, and/or segmented transformed ECG data that have been labeled as being associated with a subject having congestive heart failure, a type of congestive heart failure, a subty pe of congestive heart failure, and/or a stage of congestive heart failure.
  • ECG data can be labeled based on left ventricular ejection fraction data as normal (LVEF 50% to 70%), mild dysfunction (LVEF 40% to 49%), moderate dysfunction (LVEF 30% to 39%), or severe dysfunction (LVEF less than 30%).
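The LVEF-based labeling rule above can be sketched as a simple mapping function. The boundary handling for values outside the stated ranges (e.g., above 70%) is an assumption for illustration.

```python
def lvef_label(lvef_percent):
    """Map a left ventricular ejection fraction (%) to a training label,
    following the cut points described above; handling of values above
    70% is an assumption."""
    if lvef_percent >= 50:
        return "normal"                # LVEF 50% to 70%
    if lvef_percent >= 40:
        return "mild dysfunction"      # LVEF 40% to 49%
    if lvef_percent >= 30:
        return "moderate dysfunction"  # LVEF 30% to 39%
    return "severe dysfunction"        # LVEF less than 30%

print(lvef_label(55))  # normal
print(lvef_label(25))  # severe dysfunction
```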
  • synthetic training data may be generated.
  • techniques such as diffusion models can be modified to create realistic ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data from text prompts.
  • Synthetic training data produced by these models can augment real training data to increase diagnostic accuracy. This form of synthetic data can help address AI learning problems for which training data are scarce, such as the detection and treatment of uncommon diseases.
  • One or more neural networks are trained on the training data, as indicated at step 204.
  • the neural network can be trained by optimizing network parameters (e.g., weights, biases, or both) based on minimizing a loss function.
  • the loss function may be a mean squared error loss function, a cross-entropy loss function, or the like.
  • Training a neural network may include initializing the neural network, such as by computing, estimating, or otherwise selecting initial network parameters (e.g., weights, biases, or both).
  • an artificial neural network receives the inputs for a training example and generates an output using the bias for each node, and the connections between each node and the corresponding weights.
  • training data can be input to the initialized neural network, generating output as classified feature data.
  • the artificial neural network compares the generated output with the actual output of the training example in order to evaluate the quality of the classified feature data.
  • the classified feature data can be passed to a loss function to compute an error.
  • the current neural network can then be updated based on the calculated error (e.g., using backpropagation methods based on the calculated error). For instance, the current neural network can be updated by updating the network parameters (e.g., weights, biases, or both) in order to minimize the loss according to the loss function.
  • the training continues until a training condition is met.
  • the training condition may correspond to, for example, a predetermined number of training examples being used, a minimum accuracy threshold being reached during training and validation, a predetermined number of validation iterations being completed, and the like.
  • When the training condition has been met (e.g., by determining whether an error threshold or other stopping criterion has been satisfied), the current neural network and its associated network parameters represent the trained neural network.
  • the training processes may include, for example, gradient descent, Newton’s method, conjugate gradient, quasi-Newton, and Levenberg-Marquardt methods, among others.
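The update cycle described above (initialize parameters, generate outputs, compute a loss, update parameters by gradient descent) can be sketched with a toy one-node linear model and a mean squared error loss. This is an illustrative stand-in, not the disclosed network.

```python
import numpy as np

# Toy training data: a single linear node y = w*x + b should recover
# w = 3.0 and b = 0.5 ("actual outputs" of the training examples).
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 * x + 0.5

w, b, lr = 0.0, 0.0, 0.1            # initialized network parameters
for _ in range(200):
    pred = w * x + b                # generate outputs for the training examples
    err = pred - y
    loss = np.mean(err ** 2)        # mean squared error loss function
    # Gradients of the loss with respect to each parameter (the
    # backpropagation step for this one-node case), then the update.
    w -= lr * np.mean(2 * err * x)
    b -= lr * np.mean(2 * err)

print(round(w, 2), round(b, 2))  # 3.0 0.5
```

The same structure scales to full networks, where backpropagation computes the gradient for every weight and bias rather than just two parameters.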
  • the artificial neural network can be constructed or otherwise trained based on training data using one or more different learning techniques, such as supervised learning, unsupervised learning, reinforcement learning, ensemble learning, active learning, transfer learning, or other suitable learning techniques for neural networks.
  • supervised learning involves presenting a computer system with example inputs and their actual outputs (e.g., categorizations).
  • the artificial neural network is configured to learn a general rule or model that maps the inputs to the outputs based on the provided example input- output pairs.
  • the model may additionally or alternatively be trained to output interval measurements from the input ECG data.
  • the model may be trained on pairs of RF-based ECG data and 12-lead ECG data, on which interval measurements have been annotated.
  • the one or more trained neural networks are then stored for later use, as indicated at step 206.
  • Storing the neural network(s) may include storing network parameters (e.g., weights, biases, or both), which have been computed or otherwise estimated by training the neural network(s) on the training data.
  • Storing the trained neural network(s) may also include storing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be stored.
  • the model is continuously retrained and/or updated using new data from specific time periods. This approach allows for monitoring how well the model may adapt to changing patterns in ECG data. Additionally or alternatively, this approach allows for assessing the predictive performance of the model on evolving datasets.
  • the model may be initially trained on ECG data from a defined period using the techniques described in the present disclosure. The defined period may be a period of days, weeks, months, years, or other time scales. For example, the defined period may be a period of a few years, such as 2019-2022.
  • Performance metrics including AUC (i.e., area under the curve for a receiver operating characteristic (ROC) curve), as well as its derivatives (e.g., sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV)) at a predefined cutoff may be calculated to assess the ability of the model to differentiate between normal and abnormal ECG readings.
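The cutoff-based derivative metrics named above can be computed directly from a confusion matrix. The sketch below assumes binary labels (1 = abnormal) and hypothetical model scores, and does not guard against empty denominators.

```python
def metrics_at_cutoff(scores, labels, cutoff=0.5):
    """Sensitivity, specificity, PPV, and NPV at a predefined cutoff.
    AUC itself would be obtained by sweeping the cutoff to trace an
    ROC curve; only the fixed-cutoff metrics are shown here."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    return {
        "sensitivity": tp / (tp + fn),  # abnormal ECGs correctly flagged
        "specificity": tn / (tn + fp),  # normal ECGs correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical scores for six ECGs, first three truly abnormal
m = metrics_at_cutoff([0.9, 0.8, 0.4, 0.2, 0.7, 0.1], [1, 1, 1, 0, 0, 0])
print(m["sensitivity"])  # 0.6666666666666666
```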
  • the model may be tested on new, unseen ECG data from a subsequent time period (e.g., subsequent day or days, subsequent week or weeks, subsequent month or months, subsequent year or years, other subsequent time scales).
  • the pretrained model may be tested on new unseen ECG data from a subsequent year, such as 2023.
  • This testing phase allows for the evaluation of the generalizability of the model. Additionally or alternatively, this testing phase may be used to assess whether the model maintains its predictive accuracy when applied to a different time period.
  • Performance metrics can be compared across the datasets from the different time periods (e.g., the 2019-2022 training dataset and the 2023 test set, in the described example) to evaluate how well the model adapts to potential shifts in ECG patterns and to identify any degradation in model performance.
  • the model may be retrained using an expanding dataset, for example, including data from 2019 to 2023 to forecast performance for 2024. That is, the subsequent data set used to test the model may be appended or otherwise concatenated with the original training data set and the updated model may then be trained on a newer subsequent data set associated with another subsequent time period. The model will again be tested using the newer subsequent data set (e.g., 2024 data, in the described example), and performance metrics will be recalculated to ensure continued accuracy and robustness of the model. This process may be repeated annually, monthly, weekly, or over other time scales, with the training dataset progressively growing to include more recent data, ensuring that the model stays current with emerging trends in ECG signals and clinical conditions.
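The expanding-dataset retraining schedule described above (train on 2019-2022, test on 2023; append 2023 and test on 2024; and so on) can be sketched as a generator of train/test splits:

```python
def expanding_window_splits(years, initial_train_span=4):
    """Yield (train_years, test_year) pairs for the expanding-dataset
    retraining schedule: each cycle appends the previous test period
    to the training set and tests on the next period."""
    for i in range(initial_train_span, len(years)):
        yield years[:i], years[i]

for train, test in expanding_window_splits([2019, 2020, 2021, 2022, 2023, 2024]):
    print(train, "->", test)
# [2019, 2020, 2021, 2022] -> 2023
# [2019, 2020, 2021, 2022, 2023] -> 2024
```

The same pattern applies at monthly, weekly, or other time scales by substituting the list of periods.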
  • the aging of the model may be monitored by comparing its ROC curves over time. By visualizing how the ROC curve evolves with each successive test period, any changes in the ability of the model to distinguish between classes can be observed.
  • a decline in the AUC may signal potential performance issues, while improvements in the AUC may indicate that the model is adapting well to new data.
  • subgroup analysis can be conducted to assess whether any biases arise as the model encounters different patient populations or demographic shifts over time.
  • the model may be periodically fine-tuned, retrained, and adjusted to incorporate the latest data. This iterative process ensures that the model remains aligned with clinical needs and continues to provide reliable predictions for ECG analysis, even as the data evolves.
  • foundation models can be used to additionally or alternatively process the ECG data, transformed ECG data (e.g., scalogram data, spectrogram data, other N-dimensional data generated from ECG data), other physiological data, patient characteristics, and/or patient health data.
  • Foundation models can receive various types of data (e.g., text data, image data, sound data, other 1D, 2D, and/or 3D data types) and generate various types of clinically relevant outputs.
  • the foundation models may generate outputs as predictive scores, text outputs, classifications, and so on.
  • text outputs may include answers to questions posed by a clinical user (e.g., medical question answering), interpretive reports of ECG data and/or transformed ECG data, or other text-based reports and/or summaries of the input data.
  • generating reports across temporal and spatial domains with the ECG data and/or transformed ECG data is advantageous for identifying if a patient’s vitals and/or conditions are improving, worsening, or staying the same.
  • a language model such as a large language model (LLM)
  • Large language modeling with ECG data and/or transformed ECG data involves using machine learning models (e.g., deep learning models) to process and analyze ECG data, transformed ECG data, and associated clinical text data.
  • This approach can combine natural language processing (NLP) with computer vision or other multimodal techniques to understand the content of ECG data, transformed ECG data, medical images, or other physiologic signal data and extract relevant information from them.
  • large language modeling can be applied to ECG data, transformed ECG data, and/or medical images to enable automatic analysis of the input data and to extract clinically relevant information, such as the presence of congestive heart failure. This approach can help healthcare professionals make more accurate diagnoses and develop personalized treatment plans for patients.
  • One example method for large language modeling with ECG data and/or transformed ECG data uses CNNs to extract features from the input data, which are then fed into a recurrent neural network (RNN) to generate text descriptions of the ECG data and/or transformed ECG data.
  • the RNN can also be used to generate clinical reports based on the input data.
  • Another example approach is to use a transformer-based model, such as a BERT or GPT-4 model, to analyze both the ECG data (or transformed ECG data) and accompanying text data.
  • These models can be pre-trained on large datasets of ECG data and/or transformed ECG data and associated text to improve their accuracy and ability to identify relevant features in the input data.
  • the ECG data and/or transformed ECG data can be provided as an input to the LLM.
  • the ECG data and/or transformed ECG data are first tokenized and converted into a numerical format.
  • the tokenized input data may then be applied to an embedding layer to transform each tokenized input into a high-dimensional vector that captures relationships between the ECG data and/or transformed ECG data.
  • the embedding layer may additionally or alternatively capture semantic relationships between text data and the ECG data and/or transformed ECG data.
  • the resulting embedded vectors form a one-dimensional sequence that will be input to the LLM. Each element of the embedded vectors corresponds to a token in the tokenized input data.
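One simple way to realize the tokenize-and-embed step above is to quantize each ECG sample into a discrete token id and look up a vector per token in an embedding table. The quantization scheme, vocabulary size, and embedding dimension below are illustrative assumptions; real systems may instead tokenize beats, patches, or learned codebook entries.

```python
import numpy as np

def tokenize_ecg(signal, n_bins=256):
    # Quantize each ECG sample into one of n_bins discrete token ids
    # (a deliberately simple tokenization scheme for illustration)
    lo, hi = signal.min(), signal.max()
    return np.clip(((signal - lo) / (hi - lo) * (n_bins - 1)).astype(int),
                   0, n_bins - 1)

rng = np.random.default_rng(2)
ecg = rng.normal(size=500)              # stand-in for a single-lead trace
tokens = tokenize_ecg(ecg)

# Embedding layer: one 64-dimensional vector per possible token id
embedding = rng.normal(size=(256, 64))
embedded = embedding[tokens]            # one high-dimensional vector per token
print(embedded.shape)  # (500, 64): the sequence fed to the LLM
```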
  • the LLM may be based on a recurrent layer model, such as a long short term memory (LSTM) model.
  • the LLM may be based on an attention mechanism (e.g., transformer).
  • the LLM may be based on a generative pre-trained transformer (GPT) model.
  • the LLM may be based on combinations of such model types.
  • GPT is a type of large language model that is pre-trained on a massive corpus of text data and can generate human-like language. Language models can also be trained on data from other modalities (e.g., images, audio recordings, videos, etc.) to enable more diverse capabilities, provide a stronger learning signal, and increase learning speed.
  • the GPT model is based on the transformer architecture, which allows it to process long sequences of text efficiently. GPT can be used in a wide range of natural language processing (NLP) tasks, including language translation, text summarization, and question answering.
  • ECG data and transformed ECG data contain valuable information that can aid in the diagnosis and treatment of various medical conditions.
  • In an example of GPT-based language modeling with ECG data (or transformed ECG data) analysis, the disclosed systems and methods can analyze the input ECG data and/or transformed ECG data to generate textual reports summarizing the findings.
  • a GPT-based model can be trained on a large corpus of ECG data (and/or transformed ECG data) and medical reports and then used to generate reports for new ECG and/or transformed ECG data.
  • the generated reports can include information such as the location and size of any abnormalities associated with congestive heart failure, as well as recommendations for further testing or treatment.
  • LLMs can also be used to develop a chatbot based on a foundation model that can serve as a physician's assistant to support more accurate diagnosis and tailored therapy selection. These capabilities can improve the accuracy and efficiency of patient care while increasing patient engagement and adherence to therapy. Once a diagnosis is determined, the resulting output can be incorporated into the patient’s medical documentation through electronic medical records.
  • a foundation model can then generate tailored patient education materials and explain their care plan at the appropriate reading level based on the clinical documents.
  • the models can also be used to draft a clinic note in real-time based on the results.
  • the models can also be used to optimize clinic scheduling or to simplify generation of medical codes for billing (e.g., current procedural terminology (CPT) codes), disease surveillance, and even automated follow-up reminders.
  • the code to these models can be automated and/or updated using an LLM, GPT, or the like.
  • auto-GPT can be used to write and update its own code and execute scripts. This allows the model to recursively debug, develop, and self-improve. While input data are applied to an auto-GPT-based model, the model can update itself automatically.
  • multimodal language modeling can be used to receive multiple sources of information to train a language model.
  • this can mean combining textual information from clinical notes, laboratories, or reports with visual information from ECG data, transformed ECG data, or other sources, including for example medical images such as x-rays, CT scans, or MRIs.
  • One approach to multimodal language modeling is to use the GPT architecture, which can be effective in a variety of NLP tasks.
  • the GPT architecture is based on a transformer network, which can learn to model long-range dependencies between words in a sentence.
  • visual information can be incorporated into the GPT model through pretraining with contrastive learning. This process involves training the model to predict which ECG and text pairs are related, while also ensuring that unrelated pairs are distinguishable from each other.
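The contrastive pretraining idea above can be sketched with a CLIP-style symmetric objective: matched ECG/text embedding pairs (the diagonal of the similarity matrix) should score higher than all mismatched pairs in the batch. The embeddings, batch size, and temperature below are illustrative assumptions.

```python
import numpy as np

def _log_softmax(logits):
    # Numerically stable log-softmax over each row
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def contrastive_loss(ecg_emb, text_emb, temperature=0.07):
    """Symmetric contrastive objective: related ECG/text pairs lie on
    the diagonal and should be distinguishable from unrelated pairs."""
    e = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (e @ t.T) / temperature          # all pairwise similarities
    loss_e2t = -np.mean(np.diag(_log_softmax(logits)))    # ECG -> text
    loss_t2e = -np.mean(np.diag(_log_softmax(logits.T)))  # text -> ECG
    return (loss_e2t + loss_t2e) / 2

rng = np.random.default_rng(3)
paired = rng.normal(size=(8, 32))             # stand-ins for learned embeddings
loss_matched = contrastive_loss(paired, paired)
loss_random = contrastive_loss(paired, rng.normal(size=(8, 32)))
print(loss_matched < loss_random)  # True: alignment lowers the loss
```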
  • the resulting model can then be fine-tuned for specific tasks, such as ECG data captioning, medical report generation, or disease diagnosis.
  • a model trained on clinical notes and ECG signals or transformed ECG signals could be fine-tuned to generate reports to predict the presence of congestive heart failure or a particular type, subtype, or stage of congestive heart failure based on the ECG data or transformed ECG data.
  • a visual transformer-based model can be used.
  • higher-dimensional inputs can be provided to the underlying foundation model.
  • the ECG data and/or transformed ECG data may be tokenized and embedded, as described above, into a 2D or other N-dimensional (N > 2) vector, matrix, or tensor.
  • These higher-dimensional input data can then be applied to a suitable foundation model, such as a visual transformer-based model.
  • Vision-language processing can be improved by using paired samples sharing semantics. For instance, given an image and text pair, the text should describe the image with minimal extraneous detail.
  • temporal information in the text modality could pertain to any image that includes the described condition, creating vagueness during contrastive training.
  • Vision-language processing implementations can, in some instances, assume alignment between single images and reports, removing temporal content from reports in training data to prevent hallucinations in downstream report generation.
  • temporal information can provide complementary self-supervision by using an existing structure without requiring additional data. Rather than treating all image-report pairs in the dataset as independent, temporal correlations can be used by making previous images available for comparison to a given report.
  • a temporal vision-language processing pre-training framework can be learned from this structure.
  • a multi-image encoder that can handle the absence of previous images and potential spatial misalignment between images across time can be used in this vision-language processing. Prior images can be accounted for where available, thereby removing cross-modal ambiguity.
  • Linking multiple images or datasets has the advantage of improving image and text models and performance on temporal image (or ECG data) classification and report generation. Prefixing the prior report can significantly improve performance. When available during training and fine-tuning, earlier images and labels can also be accounted for.
  • a convolutional neural network (CNN) or LLM visual transformer hybrid multi-image encoder can be trained jointly with a text model.
  • the hybrid model can provide improved processing for tasks in both single-image and multi-image structures, achieving strong performance on disease progression classification, phrase grounding, and document generation, while offering consistent improvements on disease classification and sentence-similarity tasks.
  • the similarity between the ECG data, transformed ECG data, or other signals and text embeddings can be computed to obtain probabilities, which can be used to classify the various categories of the ECG data (or transformed ECG data) and then reported textually.
  • the outputs can also include classified feature data indicating risk categories of no risk, low risk, medium risk, and high risk.
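The similarity-to-probability step above can be sketched by comparing an ECG embedding against one text embedding per risk category and converting the similarities into probabilities with a softmax. The embeddings, dimensions, and function names here are hypothetical illustrations.

```python
import numpy as np

RISK_CATEGORIES = ["no risk", "low risk", "medium risk", "high risk"]

def classify_by_similarity(ecg_embedding, category_text_embeddings):
    """Compute cosine similarity between an ECG embedding and one text
    embedding per risk category, then softmax into probabilities."""
    e = ecg_embedding / np.linalg.norm(ecg_embedding)
    t = category_text_embeddings / np.linalg.norm(
        category_text_embeddings, axis=1, keepdims=True)
    sims = t @ e                               # one similarity per category
    probs = np.exp(sims) / np.exp(sims).sum()  # softmax over categories
    return dict(zip(RISK_CATEGORIES, probs))

rng = np.random.default_rng(4)
text_embs = rng.normal(size=(4, 16))  # hypothetical embedding per category
# ECG embedding close to the "high risk" text embedding, plus small noise
probs = classify_by_similarity(text_embs[3] + 0.1 * rng.normal(size=16), text_embs)
print(max(probs, key=probs.get))  # high risk
```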
  • a sound transformer-based model can be used.
  • the ECG data and/or transformed ECG data may be converted to an audio data format before being input to the sound transformer-based model.
  • these audio data can be tokenized and embedded into one-dimensional embedded vectors that are applied to the sound transformer-based model. Additional audio data (e.g., stethoscope recordings) may also be input to the sound transformer-based model.
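One possible audio conversion for the step above is to scale the ECG samples into 16-bit PCM and write a mono WAV file with the standard library. The 500 Hz sample rate and 16-bit depth are assumptions for illustration.

```python
import math
import struct
import wave

def ecg_to_wav(samples, path, sample_rate=500):
    """Write ECG samples to a 16-bit mono WAV file so they can be fed
    to a sound transformer-based model (one possible conversion)."""
    peak = max(abs(s) for s in samples) or 1.0
    # Scale each sample into the signed 16-bit range and pack little-endian
    frames = b"".join(struct.pack("<h", int(s / peak * 32767)) for s in samples)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)              # mono
        wav.setsampwidth(2)              # 16-bit PCM
        wav.setframerate(sample_rate)
        wav.writeframes(frames)

# Synthetic stand-in trace: a 1 Hz sinusoid sampled at 500 Hz for 2 s
trace = [math.sin(2 * math.pi * 1.0 * n / 500) for n in range(1000)]
ecg_to_wav(trace, "ecg.wav")
```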
  • Referring now to FIGS. 3 and 4, an example of a wearable device 400 for recording ECG or other physiological data and/or generating classified feature data in accordance with some embodiments is shown.
  • the wearable device 400 can include a device that is configured to be worn on a user’s wrist or limb (e.g., a smart watch, a band, a bracelet), placed on a user’s skin (e.g., a wearable patch), or worn as an article of clothing (e.g., a shirt).
  • the wearable device 400 can be in communication with an external device 410 and/or a server 412 either directly or indirectly via a network 408.
  • the network 408 may be a long-range wireless network such as the Internet, a local area network (LAN), a wide area network (WAN), or a combination thereof. In other embodiments, the network 408 may be a short-range wireless communication network, and in yet other embodiments, the network 408 may be a wired network using, for example, USB cables. In some embodiments, the network 408 may include both wired and wireless devices and connections. Similarly, the server 412 may transmit information to the external device 410 to be forwarded to the wearable device 400.
  • the wearable device 400 communicates directly with the external device 410.
  • the wearable device 400 can transmit data (e.g., physiological sensor data, other data collected or generated by the wearable device 400) to the external device 410.
  • the wearable device 400 can receive data (e.g., settings, machine learning algorithm parameters, firmware updates, etc.) from the external device 410.
  • the wearable device 400 bypasses the external device 410 to access the network 408 and communicate with the server 412 via the network 408.
  • the wearable device 400 is equipped with a long-range transceiver instead of or in addition to a short-range transceiver. In such embodiments, the wearable device 400 communicates directly with the server 412 or with the server 412 via the network 408 (in either case, bypassing the external device 410).
  • the wearable device 400 may communicate directly with both the server 412 and the external device 410.
  • the external device 410 may, for example, generate a graphical user interface to facilitate control and programming of the wearable device 400 while the server 412 may store and analyze larger amounts of data (e.g., training data, trained machine learning models and parameters) for future programming or operation of the wearable device 400.
  • the wearable device 400 may communicate directly with the server 412 without utilizing a short-range communication protocol with the external device 410.
  • the wearable device 400 communicates with the external device 410.
  • the external device 410 may include, for example, a smartphone, a tablet computer, a cellular phone, a laptop computer, a smart watch, another wearable device, and the like.
  • the wearable device 400 communicates with the external device 410, for example, to transmit at least a portion of the physiological sensor data or other data collected or generated by the wearable device 400, which in some instances may include classified feature data generated by the wearable device 400.
  • the external device 410 may include a short-range transceiver to communicate with the wearable device 400, and a long-range transceiver to communicate with the server 412.
  • the wearable device 400 can also include a transceiver to communicate with the external device 410 via, for example, a short-range communication protocol such as Bluetooth®.
  • the external device 410 bridges the communication between wearable device 400 and the server 412. That is, the wearable device 400 transmits data to the external device 410, and the external device 410 forwards the data from wearable device 400 to the server 412 over the network 408.
  • the server 412 includes a server electronic control assembly having a server electronic processor, a server memory, and a transceiver.
  • the transceiver allows the server 412 to communicate with the wearable device 400, the external device 410, or both.
  • the server electronic processor receives physiological sensor data or other data collected with or generated by the wearable device 400, and stores the received data in the server memory.
  • the server 412 may maintain a database (e.g., on the server memory) for containing physiological data, training data, trained machine learning controls (e.g., trained machine learning models and/or algorithms), artificial intelligence controls (e.g., rules and/or other control logic implemented in an artificial intelligence model and/or algorithm), and the like.
  • the server 412 may be a distributed device in which the server electronic processor and server memory are distributed among two or more units that are communicatively coupled (e.g., via the network 408).
  • the wearable device 400 includes an electronic controller 420, a power source 452, a wireless communication device 460, and one or more sensors 472, among other components. In some embodiments, the wearable device 400 may not include a wireless communication device 460.
  • the electronic controller 420 can include an electronic processor 430 and memory 440.
  • the electronic processor 430 and the memory 440 can communicate over one or more control buses, data buses, etc., which can include a device communication bus 476.
  • the control and/or data buses are shown generally in FIG. 4 for illustrative purposes. The use of one or more control and/or data buses for the interconnection between and communication among the various modules, circuits, and components would be known to a person skilled in the art.
  • the electronic processor 430 can be configured to communicate with the memory 440 to store data and retrieve stored data.
  • the electronic processor 430 can be configured to receive instructions 442 and data from the memory 440 and execute, among other things, the instructions 442.
  • the electronic processor 430 executes instructions 442 stored in the memory 440.
  • the electronic controller 420 coupled with the electronic processor 430 and the memory 440 can be configured to perform the methods described herein (e.g., the process 100 of FIG. 1).
  • the memory 440 can include read-only memory (ROM), random access memory (RAM), other non-transitory computer-readable media, or a combination thereof.
  • the memory 440 can include instructions 442 for the electronic processor 430 to execute.
  • the instructions 442 can include software executable by the electronic processor 430 to enable the electronic controller 420 to, among other things, receive data and/or commands, transmit data, and the like.
  • the software can include, for example, firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions.
  • the electronic processor 430 is configured to retrieve from memory 440 and execute, among other things, instructions related to the control processes and methods described herein.
  • the electronic processor 430 is also configured to store data on the memory 440 including physiological sensor data, classified feature data, and the like.
  • the wearable device 400 receives electrical power from the power source 452, which as an example may include a battery. Additionally or alternatively, the power source 452 may include an external power source (e.g., a wall outlet when the wearable device 400 is connected to a wall outlet).
  • the wearable device 400 may also include a wireless communication device 460.
  • the wireless communication device 460 is coupled to the electronic controller 420 (e.g., via the device communication bus 476).
  • the wireless communication device 460 may include, for example, a radio transceiver and antenna, a memory, and an electronic processor.
  • the wireless communication device 460 can further include a global navigation satellite system (GNSS) receiver configured to receive signals from GNSS satellites (e.g., global positioning system (GPS) satellites), land-based transmitters, etc.
  • the radio transceiver and antenna operate together to send and receive wireless messages to and from the external device 410, the server 412, and the like.
  • the memory of the wireless communication device 460 stores instructions to be implemented by the electronic processor of the wireless communication device 460 and/or may store data related to communications between the wearable device 400 and the external device 410 and/or the server 412.
  • the electronic processor for the wireless communication device 460 controls wireless communications between the wearable device 400 and the external device 410 and/or the server 412. For example, the electronic processor of the wireless communication device 460 buffers incoming and/or outgoing data, communicates with the electronic processor 430, and determines the communication protocol and/or settings to use in wireless communications.
  • the wireless communication device 460 is a Bluetooth® controller.
  • the Bluetooth® controller communicates with the external device 410 and/or the server 412 employing the Bluetooth® protocol. In such embodiments, therefore, the external device 410 and/or the server 412 and the wearable device 400 are within a communication range (i.e., in proximity) of each other while they exchange data.
  • the wireless communication device 460 communicates using other protocols (e.g., Wi-Fi, cellular protocols, a proprietary protocol, etc.) over a different type of wireless network.
  • the wireless communication device 460 may be configured to communicate via Wi-Fi through a wide area network such as the Internet or a local area network, or to communicate through a piconet (e.g., using infrared or NFC communications).
  • the communication via the wireless communication device 460 may be encrypted to protect the data exchanged between the wearable device 400 and the external device 410 and/or the server 412 from third parties.
  • the wireless communication device 460 exports physiological sensor data or other data collected with or generated by the wearable device 400 (e.g., classified feature data).
  • the wireless communication device 460 also enables the wearable device 400 to sync or otherwise communicate data with other devices, such as an external device 410 that is configured as a smart watch or smartphone.
  • the wearable device 400 can send and receive additional health information for the user (e.g., by syncing with a heart rate monitor, smart watch, or other wearable device).
  • the wearable device 400 can include or be coupled to one or more physiological sensors 472.
  • the physiological sensors 472 can include ECG leads, electrodes, or other conductive elements capable of measuring electrophysiology signals. That is, in some embodiments, the physiological sensors 472 are capable of recording or otherwise measuring ECG data.
  • the physiological sensors 472 can implement various lead or electrode configurations, including a 12-lead configuration, a 6-lead configuration, a 3-lead configuration, a 1-lead configuration, or the like. Additionally or alternatively, the physiological sensors 472 can also record or measure other electrophysiology signals, such as electromyography (EMG) signals.
  • the physiological sensors 472 can include additional sensors, such as photoplethysmography (PPG) sensors or PPG sensing circuits, temperature sensors or temperature sensing circuits, inertial sensors or inertial sensing circuits (e.g., accelerometers, gyroscopes, magnetometers), a pressure sensor or pressure sensing circuit (e.g., a barometer), or the like.
  • the wearable device 400 can include one or more inputs 490 (e.g., one or more buttons, switches, touchscreen, and the like) that are coupled to the electronic controller 420 and allow a user to interact with and control the wearable device 400.
  • the input 490 includes an interactive graphical user interface (GUI) element that enables user interaction with the wearable device 400.
  • the wearable device 400 may include one or more outputs 492 that are also coupled to the electronic controller 420.
  • the output(s) 492 can receive control signals from the electronic controller 420 to present data or information to a user in response, or to generate other visual, audio, or other outputs.
  • the output(s) 492 can generate a visual signal to convey information regarding the physiological sensor data, other health data, generated classified feature data, or the like.
  • the output(s) 492 may include, for example, LEDs or a display screen and may generate various signals indicative of, for example, physiological sensor data, other health data, generated classified feature data, or the like.
  • the output(s) 492 may indicate the detection of congestive heart failure based on classified feature data generated by inputting ECG data measured by the physiological sensors 472 to a trained neural network or other machine learning model.
  • FIG. 5 illustrates an example of a system 500 for generating classified feature data indicative of congestive heart failure from ECG data in accordance with some embodiments described in the present disclosure.
  • a computing device 550 can receive one or more types of data (e.g., ECG data, other physiological sensor data, other health data, training data) from data source 502, which may be a wearable device (e.g., wearable device 400).
  • computing device 550 can execute at least a portion of a congestive heart failure detection system 504 to detect the presence or quantify the likelihood of congestive heart failure from ECG data received from the data source 502.
  • the computing device 550 can communicate information about data received from the data source 502 to a server 552 over a communication network 554, which can execute at least a portion of the congestive heart failure detection system 504.
  • the server 552 can return information to the computing device 550 (and/or any other suitable computing device) indicative of an output of the congestive heart failure detection system 504.
  • computing device 550 and/or server 552 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 550 may be for example an external device 410 and the server 552 may be the server 412 described above.
  • data source 502 can be any suitable source of physiological sensor data (e.g., ECG data, PPG data), such as an ECG system or a wearable device (e.g., wearable device 400), another computing device (e.g., a server storing physiological sensor data), and so on.
  • the data source 502 may additionally or alternatively include a source of other data, such as patient health data.
  • data source 502 can be local to computing device 550.
  • data source 502 can be incorporated with computing device 550 (e.g., computing device 550 can be configured as part of a device for capturing, scanning, and/or storing data).
  • data source 502 can be connected to computing device 550 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, data source 502 can be located locally and/or remotely from computing device 550, and can communicate data to computing device 550 (and/or server 552) via a communication network (e.g., communication network 554).
  • communication network 554 can be any suitable communication network or combination of communication networks.
  • communication network 554 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on.
  • communication network 554 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
  • Communications links shown in FIG. 5 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
  • computing device 550 can include a processor 602, a display 604, one or more inputs 606, one or more communication systems 608, and/or memory 610.
  • processor 602 can be any suitable hardware processor or combination of processors, such as a central processing unit (CPU), a graphics processing unit (GPU), and so on.
  • display 604 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 606 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 608 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks.
  • communications systems 608 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 608 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 610 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 602 to present content using display 604, to communicate with server 552 via communications system(s) 608, and so on.
  • Memory 610 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 610 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 610 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 550.
  • processor 602 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 552, transmit information to server 552, and so on.
  • the processor 602 and the memory 610 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2).
  • server 552 can include a processor 612, a display 614, one or more inputs 616, one or more communications systems 618, and/or memory 620.
  • processor 612 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • display 614 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on.
  • inputs 616 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 618 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks.
  • communications systems 618 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 618 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 620 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 612 to present content using display 614, to communicate with one or more computing devices 550, and so on.
  • Memory 620 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 620 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 620 can have encoded thereon a server program for controlling operation of server 552.
  • processor 612 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
  • the server 552 is configured to perform the methods described in the present disclosure.
  • the processor 612 and memory 620 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2).
  • data source 502 can include a processor 622, one or more inputs 624, one or more communications systems 626, and/or memory 628.
  • processor 622 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more inputs 624 are generally configured to acquire data, and can include electrodes, leads, other conductive elements configured to measure electrophysiology signals, and the like.
  • one or more inputs 624 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of a physiological sensor.
  • one or more portions of the one or more inputs 624 can be removable and/or replaceable.
  • data source 502 can include any suitable inputs and/or outputs.
  • data source 502 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • data source 502 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 626 can include any suitable hardware, firmware, and/or software for communicating information to computing device 550 (and, in some embodiments, over communication network 554 and/or any other suitable communication networks).
  • communications systems 626 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 626 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 628 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 622 to control the one or more inputs 624 and/or receive data from the one or more inputs 624; to present content (e.g., images, a user interface) using a display; to communicate with one or more computing devices 550; and so on.
  • Memory 628 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 628 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 628 can have encoded thereon, or otherwise stored therein, a program for controlling operation of data source 502.
  • processor 622 can execute at least a portion of the program to generate classified feature data, transmit information and/or content (e.g., ECG data, classified feature data, alerts, messages) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer readable media can be transitory or non-transitory.
  • non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (RAM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pathology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Cardiology (AREA)
  • Evolutionary Computation (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

Electrocardiography (ECG) data are acquired from a subject and processed with a machine learning model to determine whether the subject has congestive heart failure. The machine learning model generates classified feature data that are indicative of the presence and/or likelihood of congestive heart failure, a particular type of congestive heart failure, a particular subtype of congestive heart failure, a stage of congestive heart failure, or the like.

Description

DETECTING CONGESTIVE HEART FAILURE FROM ELECTROCARDIOGRAPHY DATA USING MACHINE LEARNING
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/638,837, filed on April 25, 2024, and entitled “DETECTING CONGESTIVE HEART FAILURE FROM ELECTROCARDIOGRAPHY DATA USING MACHINE LEARNING,” which is herein incorporated by reference in its entirety.
STATEMENT OF FEDERALLY SPONSORED RESEARCH
[0002] This invention was made with government support under UL1 TR001436 awarded by the National Institutes of Health. The government has certain rights in the invention.
BACKGROUND
[0003] Electrocardiography (ECG) systems measure electrophysiological signals of cardiac activity in a subject. For example, voltages of the electrical activity of the heart are measured using electrodes placed on the subject’s skin. In a conventional 12-lead ECG, electrodes are placed on the subject’s limbs and chest. Although ECG data are representative of electrical activity of the heart, it is possible that additional information about the subject’s health can be derived or estimated from the ECG data.
SUMMARY OF THE DISCLOSURE
[0004] The present disclosure addresses the aforementioned drawbacks by providing a method for generating classified feature data indicative of congestive heart failure in a subject. The method includes accessing electrocardiography (ECG) data using a computer system, where the ECG data are acquired from a subject. A machine learning model is also accessed with the computer system, where the machine learning model has been trained on training data to detect a presence of congestive heart failure based on ECG data. The ECG data are applied to the machine learning model using the computer system, generating output as classified feature data indicative of congestive heart failure. The classified feature data are output to a user.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is a flowchart setting forth the steps of an example method for generating classified feature data indicative of congestive heart failure in a subject by applying electrocardiography (ECG) data to a machine learning model, such as a neural network.
[0006] FIG. 2 is a flowchart setting forth the steps of an example method for training a machine learning model, such as a neural network, to detect the presence or likelihood of a subject having congestive heart failure based on ECG data obtained from the subject.
[0007] FIG. 3 illustrates an example wearable device that can be used to record physiological data, such as ECG data, and to generate classified feature data indicative of congestive heart failure.
[0008] FIG. 4 is a block diagram of example components that can implement the wearable device of FIG. 3.
[0009] FIG. 5 illustrates an example system for detecting congestive heart failure based on ECG data.
[0010] FIG. 6 is a block diagram of example components that can implement the system of FIG. 5.
DETAILED DESCRIPTION
[0011] Described here are systems and methods for detecting congestive heart failure in a subject based on analysis of electrocardiography (ECG) data acquired from the subject. In general, the ECG data obtained from the subject, which may be standard 12-lead ECG data, are applied to a neural network or other machine learning algorithm, which generates output data as classified feature data indicating the presence and/or likelihood of congestive heart failure. As a result, the systems and methods described in the present disclosure provide a point-of-care test for detecting congestive heart failure in a subject, which does not require specialized imaging or other clinical tests.
[0012] The systems and methods described in the present disclosure utilize a neural network or other machine learning or AI algorithm to detect, identify, or otherwise characterize subtle patterns in ECG data that are indicative of the presence of congestive heart failure. Further, the systems and methods are capable of differentiating various levels of congestive heart failure, various functional defects associated with congestive heart failure, and so on. As a result, the disclosed systems and methods can be used as an initial screening tool in a hospital or clinic-based setting and translated to a point-of-care test that can be delivered through a portable or potentially a wearable device.
[0013] In some embodiments, the disclosed systems and methods utilize a machine learning model to detect and characterize congestive heart failure from a standard 12-lead ECG at the point of injury. In these instances, the disclosed systems and methods provide a cost-effective, non-invasive, low-risk intervention to patients or other individual users that can augment existing methods to detect and characterize congestive heart failure from a standard 12-lead ECG. Additionally or alternatively, the disclosed systems and methods can be implemented using a wearable patch or other wearable device with one or more channels and wearable elements, including shirts, watches, bands, and bracelets with conductive elements capable of recording physiologic signals. In still other embodiments, ECG data can be collected from implanted devices such as loop recorders, pacemakers, or defibrillators. In yet other embodiments, ECG data can be collected from contactless sensors, such as RF-based sensors. In each of these instances, the recorded ECG data can be processed by a machine learning model or algorithm to generate classified feature data indicative of congestive heart failure, allowing the user or clinicians to be notified (e.g., via an alert or message) that conditions of congestive heart failure, or other functional defects related to congestive heart failure, have been detected.
[0014] Inputs from various different data sources can be integrated into a single output to provide a scalable and automated means for clinicians to analyze whether congestive heart failure may be present by analyzing only a standard 12-lead ECG or ECG data obtained with other physiological sensors (e.g., a wearable device, an implanted device, a contactless RF-based sensor).
[0015] In general, the ECG data can be input to one or more artificial intelligence and/or machine learning (AI/ML) models to generate classified feature data indicative of the presence of congestive heart failure in the subject from whom the ECG data were obtained. Any suitable AI/ML or other computational modeling can be used for the analysis of the ECG data, including deep learning, a generative adversarial network (GAN), a convolutional neural network (CNN), large language models (LLMs), foundation models, Riffusion, anomaly detection, diffusion, etc. Advantageously, classical machine learning algorithms and models (e.g., random forest, support vector machines, naive Bayes classifiers, nearest neighbors, decision trees, AdaBoost, QDA, Gaussian process, etc.) can also be used in some instances. Additionally or alternatively, these classical machine learning models could be used for the analysis of ECG data and/or related patient health data, or could advantageously be used to initially determine which AI/ML algorithm or model is likely to provide the highest accuracy to develop further models for the analysis of ECG data and/or related patient health data for detecting the presence of congestive heart failure in the subject from whom the ECG data were obtained.
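As a concrete, non-authoritative illustration of the classical models named in the preceding paragraph, the following sketch implements a minimal Gaussian naive Bayes classifier and applies it to synthetic two-feature vectors standing in for ECG-derived measurements. The feature values, cluster means, and labels are fabricated for demonstration only and are not clinical data or the disclosed implementation.

```python
# Illustrative sketch: Gaussian naive Bayes on hypothetical ECG-derived
# features (e.g., a QRS-duration-like and a QT-interval-like value in ms).
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes for dense feature vectors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Per-class feature means and variances (small floor for stability).
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log P(c) + sum_f log N(x_f | mean_{c,f}, var_{c,f})
        log_like = -0.5 * (np.log(2 * np.pi * self.var_)[None, :, :]
                           + (X[:, None, :] - self.theta_[None, :, :]) ** 2
                           / self.var_[None, :, :]).sum(axis=2)
        return self.classes_[np.argmax(np.log(self.prior_)[None, :] + log_like, axis=1)]

# Synthetic demonstration: two well-separated clusters standing in for
# "no CHF" (label 0) and "CHF" (label 1) feature distributions.
rng = np.random.default_rng(0)
X0 = rng.normal([90.0, 400.0], [5.0, 10.0], size=(50, 2))
X1 = rng.normal([130.0, 470.0], [5.0, 10.0], size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

model = GaussianNB().fit(X, y)
pred = model.predict(np.array([[92.0, 405.0], [128.0, 465.0]]))
print(pred.tolist())  # [0, 1]
```

In a screening pipeline of the kind described here, such a classical model could also serve the triage role the paragraph mentions: quickly estimating which richer AI/ML model family is worth developing further.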
[0016] Referring now to FIG. 1, a flowchart is illustrated as setting forth the steps of an example method 100 for generating classified feature data using a suitably trained neural network or other machine learning algorithm. As will be described, the neural network or other machine learning algorithm takes as input ECG data (e.g., 12-lead ECG measurement data) and generates output as classified feature data indicative of congestive heart failure in the subject from whom the ECG data were obtained. Additionally or alternatively, the neural network or other machine learning algorithm may take transformed ECG data (e.g., scalogram data, spectrogram data, or other N-dimensional (for N ≥ 2) images, maps, matrices, or data structures generated from ECG data) as an input.
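Transformed ECG data of the kind mentioned above (e.g., spectrogram data) can be produced with a standard short-time Fourier transform. The sketch below is illustrative only: the window length, hop size, sampling rate, and synthetic "ECG-like" signal are assumptions for demonstration, not parameters taken from this disclosure.

```python
# Hedged sketch: turning a 1-D ECG trace into a 2-D spectrogram
# (frequency x time magnitude map), a common input format for a CNN.
import numpy as np

def ecg_spectrogram(signal, win_len=128, hop=64):
    """Short-time Fourier transform magnitude spectrogram of `signal`."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        segment = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(segment)))  # one-sided spectrum
    return np.array(frames).T  # shape: (win_len // 2 + 1, n_frames)

# Synthetic stand-in for an ECG trace: sharp periodic peaks, 10 s at an
# assumed 250 Hz sampling rate.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * t) ** 64  # narrow spikes with regular spacing

spec = ecg_spectrogram(ecg)
print(spec.shape)  # (65, 38): 65 frequency bins x 38 time frames
```

A 2-D map such as `spec` can then be treated like an image and passed to a convolutional model; a wavelet scalogram would be an analogous alternative transform.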
[0017] The method includes accessing ECG data with a computer system, as indicated at step 102. Accessing the ECG data may include retrieving such data from a memory or other suitable data storage device or medium. Alternatively, accessing the ECG data may include acquiring such data with a wearable device or an ECG system (e.g., an ECG measurement system using a 12-lead configuration, or other lead or electrode combination) and transferring or otherwise communicating the data to the computer system, which may be a part of the wearable device or ECG system.
[0018] The ECG data may include ECG signals. Additionally or alternatively, the ECG data may include variables, parameters, or other measurements that are computed, extracted, or otherwise derived from ECG signals. By way of example, the ECG data may include ECG measurements such as ventricular rate in beats per minute (bpm), atrial rate in bpm, P-R interval in milliseconds (ms), QRS duration in ms, Q-T interval in ms, QTc by Bazett's algorithm, P axis, R axis, T axis, QRS count, P-wave onset in median beat, P-wave offset in median beat, Q-onset in median beat, Q-offset in median beat, T-onset in median beat, T-offset in median beat, number of QRS complexes, QRS duration, QT interval, QT corrected, PR interval, ventricular rate, average R-R interval, Q-onset (median complex sample point), Q-offset (median complex sample point), P-onset (median complex sample point), P-offset (median complex sample point), T-onset (median complex sample point), QT calculated with the Fridericia algorithm, P-wave amplitude at P-onset, P-wave amplitude, P-wave duration, P-wave area, P-wave intrinsicoid (time from P-onset to peak of P), P-prime amplitude, P-prime duration, P-prime area, P-prime intrinsicoid (time from P-onset to peak of P-prime), Q-wave amplitude, Q-wave duration, Q-wave area, Q intrinsicoid (time from Q-onset to peak of Q), R amplitude, R duration, R-wave area, R intrinsicoid (time from R-onset to peak of R), S amplitude, S duration, S-wave area, S intrinsicoid (time from Q-onset to peak of S), R-prime amplitude, R-prime duration, R-prime wave area, R-prime intrinsicoid (time from Q-onset to peak of R-prime), S-prime amplitude, S-prime duration, S-prime wave area, S-prime intrinsicoid (time from Q-onset to peak of S-prime), STJ point (end of QRS point) amplitude, STM point (middle of the ST segment) amplitude, STE point (end of ST segment) amplitude, maximum of STJ amplitude, maximum of STM amplitude, maximum of STE amplitude, minimum of STJ amplitude, minimum of STM amplitude, special T-wave amplitude, total QRS area, QRS deflection, maximum R amplitude (R or R-prime), maximum S amplitude (S or S-prime), T amplitude, T duration, T-wave area, T intrinsicoid (time from STE to peak of T), T-prime amplitude, T-prime duration, T-prime area, T-prime intrinsicoid (time from STE to peak of T-prime), T amplitude at T-offset, P-wave area (includes P and P-prime), QRS area, T-wave area (includes T and T-prime), QRS intrinsicoid, RR interval, PP interval, and so on.
[0019] By way of example, a 12-lead ECG system can include a I Lateral lead (also referred to as an I lead), a II Inferior lead (also referred to as a II lead), a III Inferior lead (also referred to as a III lead), an aVR lead, an aVL Lateral lead (also referred to as an aVL lead), an aVF Inferior lead (also referred to as an aVF lead), a V1 Septal lead (also referred to as a V1 lead), a V2 Septal lead (also referred to as a V2 lead), a V3 Anterior lead (also referred to as a V3 lead), a V4 Anterior lead (also referred to as a V4 lead), a V5 Lateral lead (also referred to as a V5 lead), and a V6 Lateral lead (also referred to as a V6 lead). Additionally or alternatively, the ECG system can implement fewer than 12 leads, such as a single lead, six leads (e.g., all limb leads: I, II, III, aVR, aVL, aVF), or the like.
[0020] In some examples, the ECG data may be obtained using an RF-based sensor device. These RF-based sensors are capable of measuring ECG signals in addition to other biophysical signals (e.g., heart beats, respiratory rates) using transmitted RF waves, which in some instances may include RF waves transmitted according to Wi-Fi or another wireless network protocol. Such sensors enable contactless measurement of ECG data or other biophysical data. The RF-based sensors can be implemented in a standalone device, integrated into a mobile device (e.g., a smartphone, a tablet computer), integrated into a wearable device (e.g., a smartwatch, a fitness tracker, a wearable patch, a band, a bracelet), integrated into other wearables (e.g., a shirt or other wearable garment with conductive elements capable of recording physiologic signals), integrated into an implanted device (e.g., loop recorders, pacemakers, defibrillators), integrated into other medical devices (e.g., digital stethoscopes), or integrated into another device or system (e.g., an automobile or other vehicle, such as an autonomous vehicle that can transport an individual to a clinic or hospital if a condition is detected). The biophysical signals captured from radiofrequencies and/or Wi-Fi can then be used with the systems and methods described in the present disclosure to improve on the diagnosis, management, prognostics, and/or treatment of congestive heart failure.
[0021] Furthermore, radiofrequency signals can detect respiratory rates and other signals that can then be synchronously combined with ECG data, phonocardiogram (PCG) data, and/or continuous arterial blood pressure waveform data and used in the neural networks or other Al models described in the present disclosure.
[0022] Additionally or alternatively, accessing the ECG data may include accessing transformed ECG data. Accessing the transformed ECG data may include retrieving such data from a memory or other suitable data storage device or medium. Additionally or alternatively, accessing the transformed ECG data may include generating such data with a computer system. For instance, the transformed ECG data may be generated from ECG data acquired from the subject. As noted above, transformed ECG data may include scalogram data, spectrogram data, or other N-dimensional data generated from ECG data.
[0023] As a non-limiting example, scalogram data may include one or more scalograms generated from ECG data. In general, a scalogram includes an image, map, or other N- dimensional matrix or data structure depicting the time-frequency distribution of ECG data. In some implementations, a scalogram may be generated by computing a continuous wavelet transform (CWT) of the ECG data and constructing the scalogram based on the CWT coefficients. As an example, the scalogram may be depicted as a heat map or other image.
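As a non-limiting illustration of this step, the following Python sketch computes a Morlet-based CWT scalogram with NumPy alone. The mother wavelet, the w0 = 6 scale-to-frequency mapping, the sampling rate, and the synthetic test signal are all assumptions chosen for the example, not details prescribed by the method.

```python
import numpy as np

def morlet(t, w0=6.0):
    """Complex Morlet mother wavelet (admissibility correction omitted)."""
    return np.pi ** -0.25 * np.exp(1j * w0 * t - 0.5 * t ** 2)

def cwt_scalogram(signal, scales, fs):
    """Return |CWT| coefficients: one row per scale, one column per sample."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Odd-length support, capped so np.convolve(..., "same") stays length n
        m = min(int(8 * s * fs), n - 1) | 1
        t = (np.arange(m) - m // 2) / fs
        wavelet = morlet(t / s) / np.sqrt(s)  # L2-normalized daughter wavelet
        out[i] = np.abs(np.convolve(signal, wavelet[::-1].conj(), mode="same"))
    return out

# Synthetic stand-in for one ECG lead: 5 Hz and 25 Hz components
fs = 250.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

freqs = np.linspace(2, 40, 32)        # analysis frequencies in Hz
scales = 6.0 / (2 * np.pi * freqs)    # Morlet (w0 = 6) frequency-scale mapping
S = cwt_scalogram(sig, scales, fs)
print(S.shape)  # (32, 500)
```

The resulting coefficient matrix can then be rendered as a heat map or other image, as described above.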
[0024] As another example, spectrogram data may include one or more spectrograms generated from ECG data. In general, a spectrogram includes an image, map, or other N-dimensional matrix or data structure depicting a spectrum of frequencies of ECG signals in the ECG data as they vary with time. In some implementations, a spectrogram may be generated by computing a Fourier transform of the ECG signals in the ECG data and constructing the spectrogram based on the coefficients of the Fourier transform. As an example, the spectrogram may be depicted as a heat map or other image.
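The Fourier-based construction described above can be sketched, for example, with SciPy's short-time Fourier analysis. The sampling rate, window length, and synthetic test signal below are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0                      # assumed ECG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Stand-in for one ECG lead: a 1.2 Hz "beat" plus slow baseline wander
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 0.3 * t)

# Short-time Fourier transform: ~2 s windows with 50% overlap
f, seg_t, Sxx = spectrogram(ecg, fs=fs, nperseg=1024, noverlap=512)
print(Sxx.shape)  # (frequency bins, time segments) = (513, 8)
```

The matrix Sxx can then be log-scaled and rendered as a heat map image suitable for input to an image-based classifier.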
[0025] Additionally or alternatively, other images, maps, or other N-dimensional (for N > 2) matrices or other data structures may be generated from the ECG signals in the ECG signal data. For example, other N-dimensional matrices may include Gramian angular field maps, recurrence plot maps, and/or Markov transition field maps. These images could also be fused by combining scalogram, spectrogram, Gramian angular field, recurrence plot, Markov transition field, and/or other N-dimensional images, maps, or matrices to form multimodal image fusion data and multimodal feature fusion data.
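As one example of such a transformation, a Gramian angular (summation) field can be computed from a 1-D series with NumPy alone; the rescaling convention below is one common choice, not a requirement of the method.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian angular summation field: G[i, j] = cos(phi_i + phi_j),
    where phi = arccos of the series rescaled to [-1, 1]."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

sig = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in for an ECG segment
gaf = gramian_angular_field(sig)
print(gaf.shape)  # (128, 128)
```

Recurrence plot maps and Markov transition field maps can be constructed in a similar fashion, and the resulting 2-D maps stacked or fused with scalograms and spectrograms as described above.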
[0026] Additionally or alternatively, other physiological data can be accessed, including other electrophysiology data (e.g., EEG data, EMG data), PPG data, etc. In some cases, other physiological data may include echocardiogram data, such as echocardiograms, echocardiogram findings, echocardiogram variables, or the like.
[0027] In some cases, patient characteristics derived from medical imaging, ECG data, other physiological data, or the like, may also be accessed. Such patient characteristics may include data derived from a cardiac catheterization procedure, such as presence of coronary artery disease (CAD), occlusions, thrombolysis in myocardial infarction (TIMI) flow, or the like.
[0028] In still other examples, additional data may be accessed, such as patient health data. The patient health data may include data stored in, retrieved from, extracted from, or otherwise derived from the patient's electronic medical record (EMR) and/or electronic health record (EHR). The patient health data can include unstructured text, questionnaire response data, clinical laboratory data, histopathology data, genetic sequencing, medical imaging, and other such clinical data types. Examples of clinical laboratory data and/or histopathology data can include genetic testing and laboratory information, such as performance scores, lab tests, pathology results, prognostic indicators, date of genetic testing, testing method used, and so on.
[0029] Patient health data can include a set of clinical features associated with information derived from clinical records of a patient, which can include records from family members of the patient. These clinical features and data may be abstracted from unstructured clinical documents, EMR, EHR, or other sources of patient history. Such data may include patient symptoms, diagnosis, treatments, medications, therapies, responses to treatments, laboratory testing results, medical history, geographic locations of each, demographics, or other features of the patient which may be found in the patient’s EMR and/or EHR. For example, features derived from structured, curated, and/or EMR or EHR data may include clinical features such as diagnoses; symptoms; therapies; outcomes; patient demographics, such as patient name, date of birth, gender, and/or ethnicity; diagnosis dates for cancer, illness, disease, or other physical or mental conditions; personal medical history; family medical history; clinical diagnoses, such as date of initial diagnosis; and the like. Additionally, the patient health data may also include features such as treatments and outcomes, such as line of therapy, therapy groups, clinical trials, medications prescribed or taken, surgeries, imaging, adverse effects, and associated outcomes.
[0030] The patient health data may also include measurement data collected from wearable devices (e.g., physiological measurements or other data recorded with a wearable device). Physiological measurements that may be recorded with a wearable device include heart rate, temperature, or other physical parameters.
[0031] The patient health data may also include epidemiological data on the prevalence and incidence of relevant diseases, such as weekly observed incidence of new cardiac disease cases, which may change over time. This epidemiological data can be sourced from public health records, patient registries, and other relevant databases. Integrating these disease trends into the model can help ensure that the model is not only learning from the ECG signals themselves, but also from the shifting cardiac disease landscape, thereby improving its ability to capture evolving patterns in ECG readings tied to specific health conditions.
[0032] A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 104. In general, the neural network is trained, or has been trained, on training data in order to detect, identify, or otherwise characterize patterns in ECG data and/or transformed ECG data that are indicative of congestive heart failure.
[0033] Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data. In some instances, retrieving the neural network can also include retrieving, constructing, or otherwise accessing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be retrieved, selected, constructed, or otherwise accessed.
[0034] An artificial neural network generally includes an input layer, one or more hidden layers (or nodes), and an output layer. Typically, the input layer includes as many nodes as inputs provided to the artificial neural network. The number (and the type) of inputs provided to the artificial neural network may vary based on the particular task for the artificial neural network.
[0035] The input layer connects to one or more hidden layers. The number of hidden layers varies and may depend on the particular task for the artificial neural network. Additionally, each hidden layer may have a different number of nodes and may be connected to the next layer differently. For example, each node of the input layer may be connected to each node of the first hidden layer. The connection between each node of the input layer and each node of the first hidden layer may be assigned a weight parameter. Additionally, each node of the neural network may also be assigned a bias value. In some configurations, each node of the first hidden layer may not be connected to each node of the second hidden layer. That is, there may be some nodes of the first hidden layer that are not connected to all of the nodes of the second hidden layer. The connections between the nodes of the first hidden layers and the second hidden layers are each assigned different weight parameters. Each node of the hidden layer is generally associated with an activation function. The activation function defines how the hidden layer is to process the input received from the input layer or from a previous input or hidden layer. These activation functions may vary and be based on the type of task associated with the artificial neural network and also on the specific type of hidden layer implemented.
[0036] Each hidden layer may perform a different function. For example, some hidden layers can be convolutional hidden layers which can, in some instances, reduce the dimensionality of the inputs. Other hidden layers can perform statistical functions such as max pooling, which may reduce a group of inputs to the maximum value; an averaging layer; batch normalization; and other such functions. In some of the hidden layers each node is connected to each node of the next hidden layer, which may be referred to then as dense layers. Some neural networks including more than, for example, three hidden layers may be considered deep neural networks.
[0037] The last hidden layer in the artificial neural network is connected to the output layer. Similar to the input layer, the output layer typically has the same number of nodes as the possible outputs.
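The layer structure described in paragraphs [0034]-[0037] can be sketched as a small fully connected network in NumPy. The dimensions (12 inputs, two hidden layers, 2 output classes) and the ReLU/softmax activations are illustrative assumptions, not a particular claimed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative dimensions: 12 inputs (e.g., one feature per ECG lead),
# two hidden layers, and 2 output classes (CHF present / not present)
W1, b1 = rng.normal(size=(12, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)), np.zeros(2)

def forward(x):
    h1 = relu(x @ W1 + b1)        # first hidden layer (weights and biases)
    h2 = relu(h1 @ W2 + b2)       # second hidden layer
    return softmax(h2 @ W3 + b3)  # output layer: one node per possible class

probs = forward(rng.normal(size=(4, 12)))  # batch of 4 feature vectors
print(probs.shape)  # (4, 2)
```

Each row of the output sums to one, so it can be read directly as class probabilities for the classified feature data described below.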
[0038] The ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data are then input to the one or more trained neural networks, generating output as classified feature data, as indicated at step 106. For example, the classified feature data may include a congestive heart failure risk score. The congestive heart failure risk score can provide physicians or other clinicians with a recommendation to consider additional monitoring (e.g., cardiac monitoring, respiratory monitoring) for subjects whose ECG data indicate the likelihood of the subject suffering from congestive heart failure.
[0039] As another example, the classified feature data may indicate the probability for a particular classification (i.e., the probability that the ECG data include patterns, features, or characteristics indicative of congestive heart failure), such as congestive heart failure being present, congestive heart failure not being present, or the like. The classified feature data may also indicate whether a subject's vitals or conditions are improving, worsening, or staying the same over a particular time span.
[0040] Additionally or alternatively, the classified feature data may classify the ECG data as indicating a particular type of congestive heart failure, such as congestive heart failure with reduced ejection fraction, congestive heart failure with preserved ejection fraction, systolic congestive heart failure, diastolic congestive heart failure, left-sided congestive heart failure, right-sided congestive heart failure, biventricular failure, or the like. In these instances, the classified feature data can differentiate between different types of congestive heart failure and/or between different subtypes of congestive heart failure. For example, the classified feature data can differentiate one type of congestive heart failure from other types (e.g., left-sided congestive heart failure versus right-sided congestive heart failure) in addition to differentiating particular subtypes (e.g., congestive heart failure with reduced ejection fraction, congestive heart failure with preserved ejection fraction, left-sided systolic congestive heart failure, left-sided diastolic congestive heart failure) from other subtypes.
[0041] In other examples, the classified feature data can indicate a stage of congestive heart failure. For instance, the classified feature data may indicate whether the ECG data include patterns, features, or characteristics indicative of Stage I congestive heart failure, Stage II congestive heart failure, Stage III congestive heart failure, or Stage IV congestive heart failure.
[0042] Additionally or alternatively, the classified feature data can differentiate between different underlying causes and/or precipitating factors of congestive heart failure, such as congestive heart failure caused by coronary artery disease, congestive heart failure caused by hypertension, congestive heart failure caused by valvular heart disease, congestive heart failure caused by myocarditis, or the like.
[0043] In still other embodiments, the classified feature data may indicate a severity of congestive heart failure. For example, the classified feature data may include a severity score that quantifies a severity of congestive heart failure. The classified feature data may also indicate a CHF score that quantifies heart function, a symptom of congestive heart failure, an underlying cause of congestive heart failure, and/or a precipitating factor of congestive heart failure. For example, the CHF score may quantify ejection fraction (EF) for the subject.
[0044] The classified feature data generated by inputting the ECG data to the trained neural network(s) can then be displayed to a user, stored for later use or further processing, or both, as indicated at step 108.
[0045] Referring now to FIG. 2, a flowchart is illustrated as setting forth the steps of an example method 200 for training one or more neural networks (or other suitable machine learning algorithms) on training data, such that the one or more neural networks are trained to receive input as ECG data in order to generate output as classified feature data indicative of congestive heart failure.
[0046] In general, the neural network(s) can implement any number of different neural network architectures. For instance, the neural network(s) could implement a convolutional neural network, a residual neural network, or the like. In some instances, the neural network(s) may implement deep learning. Alternatively, the neural network(s) could be replaced with other suitable machine learning or artificial intelligence algorithms, such as those based on supervised learning, unsupervised learning, deep learning, ensemble learning, dimensionality reduction, and so on. In some implementations, a large language model, generative pre-trained transformer model, or other foundation model may also be used, as described in more detail below.
[0047] The method includes accessing training data with a computer system, as indicated at step 202. Accessing the training data may include retrieving such data from a memory or other suitable data storage device or medium. Alternatively, accessing the training data may include acquiring such data with a wearable device, a network of wearable devices, an RF-based sensor, an ECG system, or the like, and transferring or otherwise communicating the data to the computer system.
[0048] In general, the training data can include ECG data and/or transformed ECG data obtained from a plurality of subjects. The ECG data may be obtained using 12-lead configurations, or fewer leads (e.g., single lead, three leads, six leads, and the like). The transformed ECG data may include scalogram data, spectrogram data, or other N-dimensional data generated from ECG data acquired from the plurality of subjects. Additionally, the training data may include other data, such as patient health data or other health information collected from the subjects. In some embodiments, the training data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data that have been labeled (e.g., labeled as containing patterns, features, or characteristics indicative of congestive heart failure; labeled as being collected from a subject having a particular type, subtype, and/or stage of congestive heart failure; and the like).
[0049] The method can include assembling training data from ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data using a computer system. This step may include assembling the ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data into an appropriate data structure on which the machine learning algorithm can be trained. Assembling the training data may include assembling ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data and/or transformed ECG data, and other relevant data. For instance, assembling the training data may include generating labeled data and including the labeled data in the training data. Labeled data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data and/or transformed ECG data, or other relevant data that have been labeled as belonging to, or otherwise being associated with, one or more different classifications or categories. For instance, labeled data may include ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data, segmented ECG data, and/or segmented transformed ECG data that have been labeled as being associated with a subject having congestive heart failure, a type of congestive heart failure, a subtype of congestive heart failure, and/or a stage of congestive heart failure. As a non-limiting example, ECG data can be labeled based on left ventricular ejection fraction data as normal (LVEF 50% to 70%), mild dysfunction (LVEF 40% to 49%), moderate dysfunction (LVEF 30% to 39%), or severe dysfunction (LVEF less than 30%).
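The LVEF-based labeling scheme at the end of this paragraph can be expressed as a simple mapping. Boundary handling (e.g., folding values above 70% into the "normal" bin) is an assumption made for this sketch.

```python
def lvef_label(lvef_percent):
    """Map left ventricular ejection fraction (%) to the labels in the text.
    Values above 70% are folded into "normal" here for simplicity."""
    if lvef_percent >= 50:
        return "normal"                # LVEF 50% to 70%
    if lvef_percent >= 40:
        return "mild dysfunction"      # LVEF 40% to 49%
    if lvef_percent >= 30:
        return "moderate dysfunction"  # LVEF 30% to 39%
    return "severe dysfunction"        # LVEF less than 30%

print([lvef_label(v) for v in (62, 45, 33, 22)])
```

Such a function can be applied to echocardiogram-derived LVEF values to produce the labels attached to each ECG record in the training data.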
[0050] In some instances, synthetic training data may be generated. As one non-limiting example, techniques such as diffusion models can be adapted to create realistic ECG data, transformed ECG data, other physiological data, patient characteristics, and/or patient health data from text prompts. Synthetic training data produced by these models can augment real training data to increase diagnostic accuracy. This form of synthetic data can help address AI learning problems for which training data are scarce, such as the detection and treatment of uncommon diseases.
[0051] One or more neural networks (or other suitable machine learning algorithms) are trained on the training data, as indicated at step 204. In general, the neural network can be trained by optimizing network parameters (e.g., weights, biases, or both) based on minimizing a loss function. As one non-limiting example, the loss function may be a mean squared error loss function, a cross-entropy loss function, or the like.
[0052] Training a neural network may include initializing the neural network, such as by computing, estimating, or otherwise selecting initial network parameters (e.g., weights, biases, or both). During training, an artificial neural network receives the inputs for a training example and generates an output using the bias for each node and the connections between each node and the corresponding weights. For instance, training data can be input to the initialized neural network, generating output as classified feature data. The artificial neural network then compares the generated output with the actual output of the training example in order to evaluate the quality of the classified feature data. For instance, the classified feature data can be passed to a loss function to compute an error. The current neural network can then be updated based on the calculated error (e.g., using backpropagation methods based on the calculated error). For instance, the current neural network can be updated by updating the network parameters (e.g., weights, biases, or both) in order to minimize the loss according to the loss function. The training continues until a training condition is met. The training condition may correspond to, for example, a predetermined number of training examples being used, a minimum accuracy threshold being reached during training and validation, a predetermined number of validation iterations being completed, and the like. When the training condition has been met (e.g., by determining whether an error threshold or other stopping criterion has been satisfied), the current neural network and its associated network parameters represent the trained neural network. Different types of training processes can be used to adjust the bias values and the weights of the node connections based on the training examples. The training processes may include, for example, gradient descent, Newton's method, conjugate gradient, quasi-Newton, Levenberg-Marquardt, among others.
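As a minimal, non-limiting sketch of the loss-minimization loop described above, the following NumPy code fits a logistic-regression classifier (the simplest one-layer "network") by gradient descent on a cross-entropy loss. The toy data, learning rate, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 200 feature vectors with a linearly separable signal
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w > 0).astype(float)

w, b = np.zeros(5), 0.0
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)           # forward pass: predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = (p - y).mean()
    w -= lr * grad_w                 # gradient-descent parameter update
    b -= lr * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
print(accuracy)
```

In a full implementation, the same update pattern is applied layer by layer via backpropagation, and the loop terminates when a stopping criterion such as those listed above is satisfied.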
[0053] The artificial neural network can be constructed or otherwise trained based on training data using one or more different learning techniques, such as supervised learning, unsupervised learning, reinforcement learning, ensemble learning, active learning, transfer learning, or other suitable learning techniques for neural networks. As an example, supervised learning involves presenting a computer system with example inputs and their actual outputs (e.g., categorizations). In these instances, the artificial neural network is configured to learn a general rule or model that maps the inputs to the outputs based on the provided example input-output pairs.
[0054] When training a neural network, or other machine learning model, to receive ECG data from an RF-based sensor, the model may additionally or alternatively be trained to output interval measurements from the input ECG data. For instance, the model may be trained on pairs of RF-based ECG data and 12-lead ECG data, on which interval measurements have been annotated.
[0055] The one or more trained neural networks are then stored for later use, as indicated at step 206. Storing the neural network(s) may include storing network parameters (e.g., weights, biases, or both), which have been computed or otherwise estimated by training the neural network(s) on the training data. Storing the trained neural network(s) may also include storing the particular neural network architecture to be implemented. For instance, data pertaining to the layers in the neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be stored.
[0056] To ensure the AI and/or machine learning models remain effective and accurate over time, sequential model updates can be implemented. In these cases, a model is continuously retrained and/or updated using new data from specific time periods. This approach allows for monitoring how well the model may adapt to changing patterns in ECG data. Additionally or alternatively, this approach allows for assessing the predictive performance of the model on evolving datasets. As a non-limiting example, the model may be initially trained on ECG data from a defined period using the techniques described in the present disclosure. The defined period may be a period of days, weeks, months, years, or other time scales. For example, the defined period may be a period of a few years, such as 2019-2022.
[0057] Performance metrics, including AUC (i.e., area under the curve for a receiver operating characteristic (ROC) curve), as well as its derivatives (e.g., sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV)) at a predefined cutoff may be calculated to assess the ability of the model to differentiate between normal and abnormal ECG readings.
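These metrics can be computed directly from predicted scores and true labels. The sketch below uses the rank-sum (Mann-Whitney) formulation of AUC together with simple confusion-matrix counts at an assumed cutoff; the scores and labels are illustrative data only.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank-sum formulation (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def threshold_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, PPV, and NPV at a predefined cutoff."""
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9])
auc = roc_auc(scores, labels)
m = threshold_metrics(scores, labels, cutoff=0.5)
print(auc, m)  # AUC = 0.9375 for this toy example
```

In practice these quantities would be computed on a held-out test set, with the cutoff chosen according to the clinical trade-off between sensitivity and specificity.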
[0058] Once the model is trained, it may be tested on new, unseen ECG data from a subsequent time period (e.g., subsequent day or days, subsequent week or weeks, subsequent month or months, subsequent year or years, other subsequent time scales). For example, when the model is trained on data from a period of years such as 2019-2022, the pretrained model may be tested on new unseen ECG data from a subsequent year, such as 2023. This testing phase allows for the evaluation of the generalizability of the model. Additionally or alternatively, this testing phase may be used to assess whether the model maintains its predictive accuracy when applied to a different time period. Performance metrics can be compared across the datasets from the different time periods (e.g., the 2019-2022 training dataset and the 2023 test set, in the described example) to evaluate how well the model adapts to potential shifts in ECG patterns and to identify any degradation in model performance.
[0059] As additional data become available, the model may be retrained using an expanding dataset, for example, including data from 2019 to 2023 to forecast performance for 2024. That is, the subsequent data set used to test the model may be appended or otherwise concatenated with the original training data set and the updated model may then be trained on a newer subsequent data set associated with another subsequent time period. The model will again be tested using the newer subsequent data set (e.g., 2024 data, in the described example), and performance metrics will be recalculated to ensure continued accuracy and robustness of the model. This process may be repeated annually, monthly, weekly, or over other time scales, with the training dataset progressively growing to include more recent data, ensuring that the model stays current with emerging trends in ECG signals and clinical conditions.
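The expanding-window retraining schedule described above can be sketched as follows. Here `train_model` and `auc_on` are hypothetical placeholders for whichever training and evaluation routines are actually used, and the per-year dataset handles are illustrative.

```python
# Illustrative per-year dataset handles (stand-ins for real ECG datasets)
yearly_data = {
    2019: "ecg_2019", 2020: "ecg_2020", 2021: "ecg_2021",
    2022: "ecg_2022", 2023: "ecg_2023", 2024: "ecg_2024",
}

def train_model(datasets):
    """Hypothetical training routine; returns a placeholder model."""
    return {"trained_on": list(datasets)}

def auc_on(model, dataset):
    """Hypothetical evaluation routine; returns a placeholder AUC."""
    return 0.9

history = []
train_years = [2019, 2020, 2021, 2022]     # initial training window
for test_year in (2023, 2024):
    model = train_model([yearly_data[y] for y in train_years])
    history.append((test_year, auc_on(model, yearly_data[test_year])))
    train_years.append(test_year)          # fold test year into future training

print(history)
```

Each iteration tests on the newest unseen period, records the performance metric, and then expands the training window, matching the 2019-2022 / 2023 / 2024 progression in the example above.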
[0060] Throughout this process, the aging of the model may be monitored by comparing its ROC curves over time. By visualizing how the ROC curve evolves with each successive test period, any changes in the ability of the model to distinguish between classes can be observed. A decline in the AUC, as a non-limiting example, may signal potential performance issues, while improvements in the AUC may indicate that the model is adapting well to new data. Additionally or alternatively, subgroup analysis can be conducted to assess whether any biases arise as the model encounters different patient populations or demographic shifts over time.
[0061] To maintain consistent performance and mitigate any biases, the model may be periodically fine-tuned, retrained, and adjusted to incorporate the latest data. This iterative process ensures that the model remains aligned with clinical needs and continues to provide reliable predictions for ECG analysis, even as the data evolves.
[0062] In some embodiments, foundation models can be used to additionally or alternatively process the ECG data, transformed ECG data (e.g., scalogram data, spectrogram data, other N-dimensional data generated from ECG data), other physiological data, patient characteristics, and/or patient health data. Foundation models can receive various types of data (e.g., text data, image data, sound data, other 1D, 2D, and/or 3D data types) and generate various types of clinically relevant outputs. For example, the foundation models may generate outputs as predictive scores, text outputs, classifications, and so on. As a non-limiting example, text outputs may include answers to questions posed by a clinical user (e.g., medical question answering), interpretive reports of ECG data and/or transformed ECG data, or other text-based reports and/or summaries of the input data. Beyond giving vital measurements and diagnostic capabilities, generating reports across temporal and spatial domains with the ECG data and/or transformed ECG data is advantageous for identifying if a patient's vitals and/or conditions are improving, worsening, or staying the same.
[0063] As one example, a language model, such as a large language model (LLM), can be used. Large language modeling with ECG data and/or transformed ECG data involves using machine learning models (e.g., deep learning models) to process and analyze ECG data, transformed ECG data, and associated clinical text data. This approach can combine natural language processing (NLP) with computer vision or other multimodal techniques to understand the content of ECG data, transformed ECG data, medical images, or other physiologic signal data and extract relevant information from them. Advantageously, large language modeling can be applied to ECG data, transformed ECG data, and/or medical images to enable automatic analysis of the input data and to extract clinically relevant information, such as the presence of congestive heart failure. This approach can help healthcare professionals make more accurate diagnoses and develop personalized treatment plans for patients.
[0064] One example method for large language modeling with ECG data and/or transformed ECG data uses CNNs to extract features from the input data, which are then fed into a recurrent neural network (RNN) to generate text descriptions of the ECG data and/or transformed ECG data. The RNN can also be used to generate clinical reports based on the input data. Another example approach is to use a transformer-based model, such as a BERT or GPT-4 model, to analyze both the ECG data (or transformed ECG data) and accompanying text data. These models can be pre-trained on large datasets of ECG data and/or transformed ECG data and associated text to improve their accuracy and ability to identify relevant features in the input data.
[0065] As a non-limiting example, the ECG data and/or transformed ECG data can be provided as an input to the LLM. The ECG data and/or transformed ECG data are first tokenized and converted into a numerical format. The tokenized input data may then be applied to an embedding layer to transform each tokenized input into a high-dimensional vector that captures relationships between the ECG data and/or transformed ECG data. In some instances, the embedding layer may additionally or alternatively capture semantic relationships between text data and the ECG data and/or transformed ECG data. The resulting embedded vectors form a one-dimensional sequence that will be input to the LLM. Each element of the embedded vectors corresponds to a token in the tokenized input data.
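The tokenize-then-embed step described above can be sketched as follows. In this toy illustration, each ECG sample is quantized into one of a small vocabulary of amplitude bins (the "tokens"), and each token is mapped to a fixed vector by an embedding table. The bin count, vector size, and random table values are all assumptions for the example; a trained model would learn its own tokenizer and embedding weights.

```python
# Sketch of tokenizing ECG samples and embedding them into vectors that form
# the one-dimensional input sequence for an LLM, per paragraph [0065].

import random

VOCAB_SIZE = 16   # number of amplitude bins (tokens); illustrative choice
EMBED_DIM = 8     # embedding vector length; illustrative choice

random.seed(0)
embedding_table = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
                   for _ in range(VOCAB_SIZE)]

def tokenize(ecg, lo=-1.0, hi=1.0):
    """Quantize each sample into an integer token in [0, VOCAB_SIZE - 1]."""
    span = hi - lo
    return [min(VOCAB_SIZE - 1, max(0, int((x - lo) / span * VOCAB_SIZE)))
            for x in ecg]

def embed(tokens):
    """Look up one embedding vector per token, forming the model input."""
    return [embedding_table[t] for t in tokens]

ecg = [0.0, 0.1, 0.9, -0.4, 0.05]   # toy ECG samples
vectors = embed(tokenize(ecg))
print(len(vectors), len(vectors[0]))   # one 8-dim vector per sample
```

Each element of the resulting sequence corresponds to one token of the tokenized input, matching the structure described above.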
[0066] Any suitable LLM can be used. As one non-limiting example, the LLM may be based on a recurrent layer model, such as a long short-term memory (LSTM) model. As another non-limiting example, the LLM may be based on an attention mechanism (e.g., transformer). For instance, the LLM may be based on a generative pre-trained transformer (GPT) model. In still other examples, the LLM may be based on combinations of such model types.
[0067] GPT is a type of large language model that is pre-trained on a massive corpus of text data and can generate human-like language. Language models can also be trained on data from other modalities (e.g., images, audio recordings, videos, etc.) to enable more diverse capabilities, provide a stronger learning signal, and increase learning speed. The GPT model is based on the transformer architecture, which allows it to process long sequences of text efficiently. GPT can be used in a wide range of natural language processing (NLP) tasks, including language translation, text summarization, and question answering.
[0068] ECG data and transformed ECG data contain valuable information that can aid in the diagnosis and treatment of various medical conditions. By combining GPT-based language modeling with ECG data (or transformed ECG data) analysis, the disclosed systems and methods can analyze the input ECG data and/or transformed ECG data to generate textual reports summarizing the findings. For example, a GPT-based model can be trained on a large corpus of ECG data (and/or transformed ECG data) and medical reports and then used to generate reports for new ECG and/or transformed ECG data. The generated reports can include information such as the location and size of any abnormalities associated with congestive heart failure, as well as recommendations for further testing or treatment.
[0069] These LLMs can also be used to develop a chatbot based on a foundation model that can serve as a physician's assistant to support more accurate diagnosis and tailored therapy selection. These capabilities can improve the accuracy and efficiency of patient care while increasing patient engagement and adherence to therapy. Once a diagnosis is determined, the output can be incorporated into the patient's medical documentation through electronic medical records.
[0070] Once a diagnosis is made with these models, clinical documents can be generated as described above. A foundation model can then generate tailored patient education materials and explain the patient's care plan at an appropriate reading level based on the clinical documents. The models can also be used to draft a clinic note in real time based on the results. As another advantage, the models can also be used to optimize clinic scheduling or to simplify generation of medical codes for billing (e.g., current procedural terminology (CPT) codes), disease surveillance, and even automated follow-up reminders.
[0071] In some instances, the code for these models can be automated and/or updated using an LLM, GPT, or the like. For example, auto-GPT can be used to write and update its own code and execute scripts. This allows the model to recursively debug, develop, and self-improve. As input data are applied to an auto-GPT-based model, the model can update itself automatically.
[0072] As described herein, multimodal language modeling can be used to receive multiple sources of information to train a language model. In the context of ECG data and transformed ECG data, this can mean combining textual information from clinical notes, laboratories, or reports with visual information from ECG data, transformed ECG data, or other sources, including for example medical images such as x-rays, CT scans, or MRIs. One approach to multimodal language modeling is to use the GPT architecture, which can be effective in a variety of NLP tasks. The GPT architecture is based on a transformer network, which can learn to model long-range dependencies between words in a sentence. To adapt GPT for multimodal language modeling with ECG data and/or transformed ECG data, visual information can be incorporated into the GPT model through pretraining with contrastive learning. This process involves training the model to predict which ECG and text pairs are related, while also ensuring that unrelated pairs are distinguishable from each other. The resulting model can then be fine-tuned for specific tasks, such as ECG data captioning, medical report generation, or disease diagnosis. For example, a model trained on clinical notes and ECG signals or transformed ECG signals could be fine-tuned to generate reports to predict the presence of congestive heart failure or a particular type, subtype, or stage of congestive heart failure based on the ECG data or transformed ECG data.
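The contrastive pre-training objective described above can be sketched as follows. Given a batch of matched ECG/text embedding pairs, training pushes each ECG embedding toward its own report and away from the others. The embeddings here are toy hand-written vectors; in practice they would be produced by the ECG and text encoders.

```python
# Minimal sketch of a contrastive (CLIP-style) objective for ECG/text pairs:
# cross-entropy over a similarity matrix, where row i should select column i.

import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(ecg_embs, text_embs, temperature=0.1):
    """Average negative log softmax probability of each true (i, i) pair."""
    loss = 0.0
    for i, e in enumerate(ecg_embs):
        logits = [cosine(e, t) / temperature for t in text_embs]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]
    return loss / len(ecg_embs)

# Correctly matched pairs score a lower loss than mismatched (shuffled) pairs.
ecg = [[1.0, 0.0], [0.0, 1.0]]
text_aligned = [[0.9, 0.1], [0.1, 0.9]]
text_shuffled = [[0.1, 0.9], [0.9, 0.1]]
print(contrastive_loss(ecg, text_aligned) < contrastive_loss(ecg, text_shuffled))
```

Minimizing this loss is what trains the model to "predict which ECG and text pairs are related" while keeping unrelated pairs distinguishable.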
[0073] As another example, a visual transformer-based model can be used. In these instances, higher-dimensional inputs can be provided to the underlying foundation model. For example, the ECG data and/or transformed ECG data may be tokenized and embedded, as described above, into a 2D or other N-dimensional (N > 2) vector, matrix, or tensor. These higher dimensional input data can then be applied to a suitable foundation model, such as a visual transformer-based model. [0074] Vision-language processing can be improved by using paired samples sharing semantics. For instance, given an image and text pair, the text should describe the image with minimal extraneous detail. Without knowledge of this initial image, temporal information in the text modality (e.g., "condition is improving," "condition is worsening," "condition is stable") could pertain to any image including "condition," creating vagueness during contrastive training. Vision-language processing implementations can, in some instances, assume alignment between single images and reports, removing temporal content from reports in training data to prevent hallucinations in downstream report generation.
[0075] In other implementations of vision-language processing, temporal information can provide complementary self-supervision by using an existing structure without requiring additional data. Rather than treating all image-report pairs in the dataset as independent, temporal correlations can be used by making previous images available for comparison to a given report. A temporal vision-language processing pre-training framework can be learned from this structure. In some implementations, a multi-image encoder that can handle the absence of previous images and potential spatial misalignment between images across time can be used in this vision-language processing. Prior images can be accounted for where available, thereby removing cross-modal ambiguity. Linking multiple images or datasets has the advantage of improving image and text models and performance on temporal image (or ECG data) classification and report generation. Prefixing the prior report can significantly improve performance. When available during training and fine-tuning, earlier images and labels can also be accounted for.
[0076] As one non-limiting example, a convolutional neural network (CNN) or LLM visual transformer hybrid multi-image encoder can be trained jointly with a text model. The hybrid model can provide improved processing for tasks in both single-image and multi-image structures, achieving strong performance on disease progression classification, phrase grounding, and document generation, while offering consistent improvements on disease category and sentence-similarity tasks. The similarity between the ECG data, transformed ECG data, or other signals and text embeddings can be computed to obtain probabilities, which can be used to classify the various categories of the ECG data (or transformed ECG data) and then reported textually. The outputs can also include classified feature data indicating risk categories of no risk, low risk, medium risk, and high risk. These outputs can be displayed in risk and probability categories, with the model drawing more attention for action or testing when a result falls into a high-risk category versus normal. [0077] As yet another example, a sound transformer-based model can be used. In these instances, the ECG data and/or transformed ECG data may be converted to an audio data format before being input to the sound transformer-based model. As above, these audio data can be tokenized and embedded into one-dimensional embedded vectors that are applied to the sound transformer-based model. Additional audio data (e.g., stethoscope recordings) may also be input to the sound transformer-based model.
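The similarity-to-probability classification step described in paragraph [0076] can be sketched as follows: similarities between an ECG embedding and category text embeddings are converted to probabilities with a softmax, and the top category is mapped to a risk bucket. The category names and risk mapping are illustrative assumptions; a deployed model would define its own label set.

```python
# Sketch of mapping ECG/text embedding similarities to probabilities and a
# risk category (no risk, low risk, medium risk, high risk).

import math

def softmax(xs):
    """Convert raw similarity scores to probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical category-to-risk mapping (illustrative only).
RISK = {"normal": "no risk", "mild CHF": "low risk",
        "moderate CHF": "medium risk", "severe CHF": "high risk"}

def classify(similarities):
    """Return per-category probabilities and the risk bucket of the top one."""
    cats = list(RISK)
    probs = softmax(similarities)
    best = cats[probs.index(max(probs))]
    return dict(zip(cats, probs)), RISK[best]

# Toy similarities: the ECG embedding is most similar to the "severe CHF" text.
probs, risk = classify([0.1, 0.2, 0.4, 1.5])
print(risk)  # high risk
```

A high-risk result like this one is what would draw attention for further action or testing, as described above.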
[0078] Referring now to FIGS. 3 and 4, an example of a wearable device 400 for recording ECG or other physiological data and/or generating classified feature data in accordance with some embodiments is shown.
[0079] As shown in FIG. 3, the wearable device 400 can include a device that is configured to be worn on a user’s wrist or limb (e.g., a smart watch, a band, a bracelet), placed on a user’s skin (e.g., a wearable patch), or worn as an article of clothing (e.g., a shirt). The wearable device 400 can be in communication with an external device 410 and/or a server 412 either directly or indirectly via a network 408.
[0080] The network 408 may be a long-range wireless network such as the Internet, a local area network (LAN), a wide area network (WAN), or a combination thereof. In other embodiments, the network 408 may be a short-range wireless communication network, and in yet other embodiments, the network 408 may be a wired network using, for example, USB cables. In some embodiments, the network 408 may include both wired and wireless devices and connections. Similarly, the server 412 may transmit information to the external device 410 to be forwarded to the wearable device 400.
[0081] In some embodiments, the wearable device 400 communicates directly with the external device 410. For example, the wearable device 400 can transmit data (e.g., physiological sensor data, other data collected or generated by the wearable device 400) to the external device 410. Similarly, the wearable device 400 can receive data (e.g., settings, machine learning algorithm parameters, firmware updates, etc.) from the external device 410.
[0082] In some other embodiments, the wearable device 400 bypasses the external device 410 to access the network 408 and communicate with the server 412 via the network 408. In some embodiments, the wearable device 400 is equipped with a long-range transceiver instead of or in addition to a short-range transceiver. In such embodiments, the wearable device 400 communicates directly with the server 412 or with the server 412 via the network 408 (in either case, bypassing the external device 410).
[0083] In some embodiments, the wearable device 400 may communicate directly with both the server 412 and the external device 410. In such embodiments, the external device 410 may, for example, generate a graphical user interface to facilitate control and programming of the wearable device 400 while the server 412 may store and analyze larger amounts of data (e.g., training data, trained machine learning models and parameters) for future programming or operation of the wearable device 400. In other embodiments, the wearable device 400 may communicate directly with the server 412 without utilizing a short-range communication protocol with the external device 410.
[0084] In the illustrated embodiment, the wearable device 400 communicates with the external device 410. The external device 410 may include, for example, a smartphone, a tablet computer, a cellular phone, a laptop computer, a smart watch, another wearable device, and the like. The wearable device 400 communicates with the external device 410, for example, to transmit at least a portion of the physiological sensor data or other data collected or generated by the wearable device 400, which in some instances may include classified feature data generated by the wearable device 400.
[0085] In some embodiments, the external device 410 may include a short-range transceiver to communicate with the wearable device 400, and a long-range transceiver to communicate with the server 412. In the illustrated embodiment, the wearable device 400 can also include a transceiver to communicate with the external device 410 via, for example, a short-range communication protocol such as Bluetooth®. In some embodiments, the external device 410 bridges the communication between wearable device 400 and the server 412. That is, the wearable device 400 transmits data to the external device 410, and the external device 410 forwards the data from wearable device 400 to the server 412 over the network 408.
[0086] The server 412 includes a server electronic control assembly having a server electronic processor, a server memory, and a transceiver. The transceiver allows the server 412 to communicate with the wearable device 400, the external device 410, or both. The server electronic processor receives physiological sensor data or other data collected with or generated by the wearable device 400, and stores the received data in the server memory. The server 412 may maintain a database (e.g., on the server memory) for containing physiological data, training data, trained machine learning controls (e.g., trained machine learning models and/or algorithms), artificial intelligence controls (e.g., rules and/or other control logic implemented in an artificial intelligence model and/or algorithm), and the like.
[0087] Although illustrated as a single device, the server 412 may be a distributed device in which the server electronic processor and server memory are distributed among two or more units that are communicatively coupled (e.g., via the network 408).
[0088] The wearable device 400 includes an electronic controller 420, a power source 452, a wireless communication device 460, and one or more sensors 472, among other components. In some embodiments, the wearable device 400 may not include a wireless communication device 460.
[0089] The electronic controller 420 can include an electronic processor 430 and memory 440. The electronic processor 430 and the memory 440 can communicate over one or more control buses, data buses, etc., which can include a device communication bus 476. The control and/or data buses are shown generally in FIG. 4 for illustrative purposes. The use of one or more control and/or data buses for the interconnection between and communication among the various modules, circuits, and components would be known to a person skilled in the art.
[0090] The electronic processor 430 can be configured to communicate with the memory 440 to store data and retrieve stored data. The electronic processor 430 can be configured to receive instructions 442 and data from the memory 440 and execute, among other things, the instructions 442. In particular, the electronic processor 430 executes instructions 442 stored in the memory 440. Thus, the electronic controller 420 coupled with the electronic processor 430 and the memory 440 can be configured to perform the methods described herein (e.g., the process 100 of FIG. 1).
[0091] The memory 440 can include read-only memory (ROM), random access memory (RAM), other non-transitory computer-readable media, or a combination thereof. The memory 440 can include instructions 442 for the electronic processor 430 to execute. The instructions 442 can include software executable by the electronic processor 430 to enable the electronic controller 420 to, among other things, receive data and/or commands, transmit data, and the like. The software can include, for example, firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions.
[0092] The electronic processor 430 is configured to retrieve from memory 440 and execute, among other things, instructions related to the control processes and methods described herein. The electronic processor 430 is also configured to store data on the memory 440 including physiological sensor data, classified feature data, and the like.
[0093] The wearable device 400 receives electrical power from the power source 452, which as an example may include a battery. Additionally or alternatively, the power source 452 may include an external power source (e.g., a wall outlet when the wearable device 400 is connected to a wall outlet).
[0094] In some embodiments, the wearable device 400 may also include a wireless communication device 460. In these embodiments, the wireless communication device 460 is coupled to the electronic controller 420 (e.g., via the device communication bus 476). The wireless communication device 460 may include, for example, a radio transceiver and antenna, a memory, and an electronic processor. In some examples, the wireless communication device 460 can further include a global navigation satellite system (GNSS) receiver configured to receive signals from GNSS satellites (e.g., global positioning system (GPS) satellites), land-based transmitters, etc. The radio transceiver and antenna operate together to send and receive wireless messages to and from the external device 410, the server 412, and the like. The memory of the wireless communication device 460 stores instructions to be implemented by the electronic processor of the wireless communication device 460 and/or may store data related to communications between the wearable device 400 and the external device 410 and/or the server 412.
[0095] The electronic processor for the wireless communication device 460 controls wireless communications between the wearable device 400 and the external device 410 and/or the server 412. For example, the electronic processor of the wireless communication device 460 buffers incoming and/or outgoing data, communicates with the electronic processor 430, and determines the communication protocol and/or settings to use in wireless communications. [0096] In some embodiments, the wireless communication device 460 is a Bluetooth® controller. The Bluetooth® controller communicates with the external device 410 and/or the server 412 employing the Bluetooth® protocol. In such embodiments, therefore, the external device 410 and/or the server 412 and the wearable device 400 are within a communication range (i.e., in proximity) of each other while they exchange data. In other embodiments, the wireless communication device 460 communicates using other protocols (e.g., Wi-Fi, cellular protocols, a proprietary protocol, etc.) over a different type of wireless network. For example, the wireless communication device 460 may be configured to communicate via Wi-Fi through a wide area network such as the Internet or a local area network, or to communicate through a piconet (e.g., using infrared or NFC communications). The communication via the wireless communication device 460 may be encrypted to protect the data exchanged between the wearable device 400 and the external device 410 and/or the server 412 from third parties.
[0097] The wireless communication device 460, in some embodiments, exports physiological sensor data or other data collected with or generated by the wearable device 400 (e.g., classified feature data). The wireless communication device 460 also enables the wearable device 400 to sync or otherwise communicate data with other devices, such as an external device 410 that is configured as a smart watch or smartphone. In some instances, the wearable device 400 can send and receive additional health information for the user (e.g., by syncing with a heart rate monitor, smart watch, or other wearable device).
[0098] The wearable device 400 can include or be coupled to one or more physiological sensors 472. For example, the physiological sensors 472 can include ECG leads, electrodes, or other conductive elements capable of measuring electrophysiology signals. That is, in some embodiments, the physiological sensors 472 are capable of recording or otherwise measuring ECG data. The physiological sensors 472 can implement various lead or electrode configurations, including a 12-lead configuration, a 6-lead configuration, a 3-lead configuration, a 1-lead configuration, or the like. Additionally or alternatively, the physiological sensors 472 can also record or measure other electrophysiology signals, such as electromyography (EMG) signals.
[0099] In other embodiments, the physiological sensors 472 can include additional sensors, such as photoplethysmography (PPG) sensors or PPG sensing circuits, temperature sensors or temperature sensing circuits, inertial sensors or inertial sensing circuits (e.g., accelerometers, gyroscopes, magnetometers), a pressure sensor or pressure sensing circuit (e.g., a barometer), or the like.
[00100] In some embodiments, the wearable device 400 can include one or more inputs 490 (e.g., one or more buttons, switches, touchscreen, and the like) that are coupled to the electronic controller 420 and allow a user to interact with and control the wearable device 400. In some embodiments, the input 490 includes an interactive graphical user interface (GUI) element that enables user interaction with the wearable device 400.
[00101] In some embodiments, the wearable device 400 may include one or more outputs 492 that are also coupled to the electronic controller 420. The output(s) 492 can receive control signals from the electronic controller 420 to present data or information to a user in response, or to generate other visual, audio, or other outputs. As one example, the output(s) 492 can generate a visual signal to convey information regarding the physiological sensor data, other health data, generated classified feature data, or the like. The output(s) 492 may include, for example, LEDs or a display screen and may generate various signals indicative of, for example, physiological sensor data, other health data, generated classified feature data, or the like. For example, the output(s) 492 may indicate the detection of congestive heart failure based on classified feature data generated by inputting ECG data measured by the physiological sensors 472 to a trained neural network or other machine learning model.
[00102] FIG. 5 illustrates an example of a system 500 for generating classified feature data indicative of congestive heart failure from ECG data in accordance with some embodiments described in the present disclosure. As shown in FIG. 5, a computing device 550 can receive one or more types of data (e.g., ECG data, other physiological sensor data, other health data, training data) from data source 502, which may be a wearable device (e.g., wearable device 400). In some embodiments, computing device 550 can execute at least a portion of a congestive heart failure detection system 504 to detect the presence or quantify the likelihood of congestive heart failure from ECG data received from the data source 502.
[00103] Additionally or alternatively, in some embodiments, the computing device 550 can communicate information about data received from the data source 502 to a server 552 over a communication network 554, which can execute at least a portion of the congestive heart failure detection system 504. In such embodiments, the server 552 can return information to the computing device 550 (and/or any other suitable computing device) indicative of an output of the congestive heart failure detection system 504.
[00104] In some embodiments, computing device 550 and/or server 552 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on. The computing device 550 may be for example an external device 410 and the server 552 may be the server 412 described above.
[00105] In some embodiments, data source 502 can be any suitable source of physiological sensor data (e.g., ECG data, PPG data), such as an ECG system or a wearable device (e.g., wearable device 400), another computing device (e.g., a server storing physiological sensor data), and so on. The data source 502 may additionally or alternatively include a source of other data, such as patient health data. In some embodiments, data source 502 can be local to computing device 550. For example, data source 502 can be incorporated with computing device 550 (e.g., computing device 550 can be configured as part of a device for capturing, scanning, and/or storing data). As another example, data source 502 can be connected to computing device 550 by a cable, a direct wireless link, and so on. Additionally or alternatively, in some embodiments, data source 502 can be located locally and/or remotely from computing device 550, and can communicate data to computing device 550 (and/or server 552) via a communication network (e.g., communication network 554).
[00106] In some embodiments, communication network 554 can be any suitable communication network or combination of communication networks. For example, communication network 554 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 554 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks. Communications links shown in FIG. 5 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
[00107] Referring now to FIG. 6, an example of hardware 600 that can be used to implement data source 502, computing device 550, and server 552 in accordance with some embodiments of the systems and methods described in the present disclosure is shown. As shown in FIG. 6, in some embodiments, computing device 550 can include a processor 602, a display 604, one or more inputs 606, one or more communication systems 608, and/or memory 610. In some embodiments, processor 602 can be any suitable hardware processor or combination of processors, such as a central processing unit (CPU), a graphics processing unit (GPU), and so on. In some embodiments, display 604 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 606 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[00108] In some embodiments, communications systems 608 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks. For example, communications systems 608 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 608 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00109] In some embodiments, memory 610 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 602 to present content using display 604, to communicate with server 552 via communications system(s) 608, and so on. Memory 610 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 610 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 610 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 550. In such embodiments, processor 602 can execute at least a portion of the computer program to present content (e.g., images, user interfaces, graphics, tables), receive content from server 552, transmit information to server 552, and so on. For example, the processor 602 and the memory 610 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2).
[00110] In some embodiments, server 552 can include a processor 612, a display 614, one or more inputs 616, one or more communications systems 618, and/or memory 620. In some embodiments, processor 612 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, display 614 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, and so on. In some embodiments, inputs 616 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
[00111] In some embodiments, communications systems 618 can include any suitable hardware, firmware, and/or software for communicating information over communication network 554 and/or any other suitable communication networks. For example, communications systems 618 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 618 can include hardware, firmware and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00112] In some embodiments, memory 620 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 612 to present content using display 614, to communicate with one or more computing devices 550, and so on. Memory 620 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 620 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 620 can have encoded thereon a server program for controlling operation of server 552. In such embodiments, processor 612 can execute at least a portion of the server program to transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone), and so on.
[00113] In some embodiments, the server 552 is configured to perform the methods described in the present disclosure. For example, the processor 612 and memory 620 can be configured to perform the methods described herein (e.g., the method of FIG. 1, the method of FIG. 2).
[00114] In some embodiments, data source 502 can include a processor 622, one or more inputs 624, one or more communications systems 626, and/or memory 628. In some embodiments, processor 622 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on. In some embodiments, the one or more inputs 624 are generally configured to acquire data, and can include electrodes, leads, other conductive elements configured to measure electrophysiology signals, and the like. Additionally or alternatively, in some embodiments, one or more inputs 624 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of a physiological sensor. In some embodiments, one or more portions of the one or more inputs 624 can be removable and/or replaceable.
[00115] Note that, although not shown, data source 502 can include any suitable inputs and/or outputs. For example, data source 502 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on. As another example, data source 502 can include any suitable display devices, such as a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
[00116] In some embodiments, communications systems 626 can include any suitable hardware, firmware, and/or software for communicating information to computing device 550 (and, in some embodiments, over communication network 554 and/or any other suitable communication networks). For example, communications systems 626 can include one or more transceivers, one or more communication chips and/or chip sets, and so on. In a more particular example, communications systems 626 can include hardware, firmware and/or software that can be used to establish a wired connection using any suitable port and/or communication standard (e.g., VGA, DVI video, USB, RS-232, etc.), a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
[00117] In some embodiments, memory 628 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 622 to control the one or more inputs 624 and/or receive data from the one or more inputs 624; present content (e.g., images, a user interface) using a display; communicate with one or more computing devices 550; and so on. Memory 628 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof. For example, memory 628 can include RAM, ROM, EEPROM, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on. In some embodiments, memory 628 can have encoded thereon, or otherwise stored therein, a program for controlling operation of data source 502. In such embodiments, processor 622 can execute at least a portion of the program to generate classified feature data, transmit information and/or content (e.g., ECG data, classified feature data, alerts, messages) to one or more computing devices 550, receive information and/or content from one or more computing devices 550, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
[00118] In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks), optical media (e.g., compact discs, digital video discs, Blu-ray discs), semiconductor media (e.g., random access memory (RAM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM)), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
[00119] The present disclosure has described one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.

Claims

1. A method for generating classified feature data indicative of congestive heart failure in a subject, the method comprising:
(a) accessing electrocardiography (ECG) data using a computer system, wherein the ECG data are acquired from a subject;
(b) accessing a machine learning model with the computer system, wherein the machine learning model has been trained on training data to detect a presence of congestive heart failure based on ECG data;
(c) applying the ECG data to the machine learning model using the computer system, generating output as classified feature data indicative of congestive heart failure in the subject; and
(d) outputting the classified feature data to a user.
2. The method of claim 1, wherein the ECG data include transformed ECG data comprising ECG signals in the ECG data that have been transformed to an N-dimensional dataset, wherein N ≥ 2.
3. The method of claim 2, wherein the transformed ECG data comprise scalogram data.
4. The method of claim 2, wherein the transformed ECG data comprise spectrogram data.
5. The method of claim 2, wherein the machine learning model has been trained on the training data to detect the presence of congestive heart failure based on at least one of the ECG data or the transformed ECG data.
6. The method of any one of claims 1-5, wherein the ECG data are obtained with a 12-lead ECG system.
7. The method of any one of claims 1-5, wherein the ECG data are obtained with fewer than twelve leads.
8. The method of claim 7, wherein the ECG data are obtained with six leads.
9. The method of claim 7, wherein the ECG data are obtained with a single lead.
10. The method of any one of claims 1-5, wherein the ECG data are obtained with a radiofrequency (RF)-based biophysical sensor.
11. The method of any one of claims 1-5, wherein accessing the ECG data comprises acquiring the ECG data with one or more physiological sensors.
12. The method of claim 11, wherein the ECG data are acquired using one or more physiological sensors coupled to a wearable device.
13. The method of claim 11, wherein the ECG data are acquired using one or more physiological sensors not in contact with the subject.
14. The method of any one of claims 1-5, wherein the machine learning model comprises a neural network.
15. The method of any one of claims 1-5, wherein the machine learning model comprises a large language model (LLM).
16. The method of claim 15, wherein the LLM implements a generative pretrained transformer (GPT) architecture.
17. The method of claim 15, wherein the classified feature data include text data comprising a report.
18. The method of claim 17, wherein the report indicates whether the subject's condition is improving, worsening, or staying the same.
19. The method of claim 17, wherein the report indicates recommendations for further testing or treatment of the subject.
20. The method of claim 17, wherein the report includes patient education materials that explain a care plan tailored to the subject.
21. The method of claim 17, wherein the report comprises clinic notes.
22. The method of claim 17, wherein the report comprises medical billing codes.
23. The method of any one of claims 1-5, wherein the classified feature data are indicative of the subject having a type of congestive heart failure.
24. The method of claim 23, wherein the type of congestive heart failure comprises one of left-sided congestive heart failure, right-sided congestive heart failure, or biventricular congestive heart failure.
25. The method of claim 24, wherein the classified feature data are further indicative of the subject having a subtype of congestive heart failure.
26. The method of claim 25, wherein the subtype of congestive heart failure comprises one of systolic congestive heart failure, diastolic congestive heart failure, reduced ejection fraction congestive heart failure, or preserved ejection fraction congestive heart failure.
27. The method of any one of claims 1-5, wherein the classified feature data are indicative of a stage of congestive heart failure.
28. The method of any one of claims 1-5, wherein the classified feature data are indicative of underlying causes of congestive heart failure in the subject.
29. The method of claim 28, wherein the underlying causes of congestive heart failure in the subject include at least one of coronary artery disease, hypertension, valvular heart disease, or myocarditis.
30. The method of any one of claims 1-5, wherein the classified feature data comprise a risk score for the subject having congestive heart failure.
31. The method of any one of claims 1-5, wherein the classified feature data indicate a severity of congestive heart failure in the subject.
32. The method of claim 31, wherein the classified feature data further indicate a quantitative estimate of ejection fraction for the subject.
33. The method of claim 1, further comprising accessing patient health data with the computer system and additionally inputting the patient health data to the machine learning model.
34. The method of claim 33, wherein the patient health data comprise epidemiological data associated with congestive heart failure.
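The claimed method (accessing ECG data, transforming it to a 2-D time-frequency dataset, and applying a trained machine learning model to produce classified feature data) can be illustrated with a minimal sketch. This is an illustrative example only, not the claimed implementation: the sampling rate, window length, hop size, and the toy linear classifier standing in for the trained machine learning model are all assumptions introduced for demonstration.

```python
import numpy as np

def ecg_to_spectrogram(signal, win=64, hop=32):
    """Transform a 1-D ECG signal into a 2-D spectrogram: a
    time-frequency magnitude image of shape (freq_bins, time_frames),
    suitable as input to an image-style classifier."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        segment = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(segment)))  # magnitude spectrum
    return np.array(frames).T  # (win // 2 + 1, n_frames)

def classify(spectrogram, weights, bias):
    """Toy linear stand-in for the trained model: maps the flattened
    spectrogram to a score in (0, 1) via a logistic function. A real
    system would use a trained neural network instead."""
    x = spectrogram.ravel()
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

# Usage with a synthetic 4-second signal sampled at 250 Hz (assumed rate):
ecg = np.sin(2 * np.pi * 1.2 * np.arange(1000) / 250.0)
spec = ecg_to_spectrogram(ecg)          # 2-D transformed ECG data
rng = np.random.default_rng(0)          # placeholder, untrained weights
score = classify(spec, rng.normal(scale=1e-3, size=spec.size), 0.0)
```

A scalogram variant would substitute a continuous wavelet transform for the windowed Fourier transform above; either way, the transform yields a dataset with N ≥ 2 dimensions that an image-oriented model can classify.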
PCT/US2025/026525 2024-04-25 2025-04-25 Detecting congestive heart failure from electrocardiography data using machine learning Pending WO2025227125A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463638837P 2024-04-25 2024-04-25
US63/638,837 2024-04-25

Publications (1)

Publication Number Publication Date
WO2025227125A1 (en) 2025-10-30

Family

ID=97491078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/026525 Pending WO2025227125A1 (en) 2024-04-25 2025-04-25 Detecting congestive heart failure from electrocardiography data using machine learning

Country Status (1)

Country Link
WO (1) WO2025227125A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160120464A1 (en) * 2004-12-23 2016-05-05 Resmed Limited Discrimination of cheyne-stokes breathing patterns by use of oximetry signals
US20190076044A1 (en) * 2017-09-11 2019-03-14 Heart Test Laboratories, Inc. Time-frequency analysis of electrocardiograms
US20210161480A1 (en) * 2018-03-16 2021-06-03 Zoll Medical Corporation Monitoring physiological status based on bio-vibrational and radio frequency data analysis
US11045271B1 (en) * 2021-02-09 2021-06-29 Bao Q Tran Robotic medical system
US20230343464A1 (en) * 2021-05-28 2023-10-26 Tempus Labs, Inc. ECG-Based Cardiovascular Disease Detection Systems and Related Methods
US11896329B1 (en) * 2023-01-23 2024-02-13 Ix Innovation Llc Robotic arthroscopic surgery with machine learning
US20240363247A1 (en) * 2023-04-28 2024-10-31 Mayo Foundation For Medical Education And Research Method and an apparatus for detecting a level of cardiovascular disease

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25795346

Country of ref document: EP

Kind code of ref document: A1