
WO2025101292A1 - System and method for automated, noninvasive identification of arteriovenous access dysfunction - Google Patents


Info

Publication number
WO2025101292A1
Authority
WO
WIPO (PCT)
Prior art keywords
access
dysfunctional
wearable
logic
biosensing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/050110
Other languages
French (fr)
Inventor
Forrest Miller
David Whittaker
Samir Shreim
Francis Honore
Zelalem Engida
Arnold Kalmbach
Philip Zeman
Christopher Warren
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alio Inc
Original Assignee
Graftworx Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Graftworx Inc filed Critical Graftworx Inc
Publication of WO2025101292A1


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/145: Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B5/1455: Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551: Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6843: Monitoring or controlling sensor contact pressure
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6844: Monitoring or controlling distance between sensor and tissue
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6846: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive
    • A61B5/6847: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive mounted on an invasive device
    • A61B5/6848: Needles
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Definitions

  • Embodiments of the disclosure relate to the field of wearable biosensing devices. More specifically, one embodiment of the disclosure relates to a biosensing device with one or more sensors positioned to assist in identifying a dysfunctional arteriovenous (AV) access from collected data.
  • An AV access is a surgically created high-flow vessel that lies close enough to the skin surface and is large enough to withstand repeated cannulation.
  • The AV access is needed for receiving maintenance hemodialysis; therefore, early detection of access dysfunction is helpful in monitoring patient health.
  • If the AV access fails, the patient may need to receive dialysis via a central venous catheter for weeks or months until a new AV access is ready.
  • Central venous catheter dialysis exposes the patient to higher risks of sepsis and heart failure.
  • Physical examination by a trained physician remains the gold standard technique for noninvasive screening of AV accesses to identify whether an AV access is beginning to fail.
  • The standard physical examination identifies the presence of normal AV access characteristics, such as (i) bruit, the audio associated with blood flow normally detected via a stethoscope, (ii) pulse, and (iii) thrill, represented by vibrations of the vessel normally checked by palpation with the fingertips. Thereafter, a determination is made as to whether any or all of the bruit, pulse, and/or thrill is abnormal, resulting in referral for an ultrasound and/or further vascular assessment.
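  • As a minimal illustration of the screening rule described above, any abnormal finding among bruit, pulse, and thrill triggers referral for further vascular assessment. The function name and the reduction of each finding to a normal/abnormal boolean are assumptions for illustration, not part of the disclosure:

```python
def needs_vascular_referral(bruit_normal: bool,
                            pulse_normal: bool,
                            thrill_normal: bool) -> bool:
    """Simplified screening rule: any abnormal finding among bruit,
    pulse, or thrill triggers referral for ultrasound and/or further
    vascular assessment (illustrative sketch only)."""
    return not (bruit_normal and pulse_normal and thrill_normal)
```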
  • FIG. 1 is a perspective, anterior-facing view of an exemplary embodiment of a wearable biosensing device deployed within a health monitoring system.
  • FIG. 2 is an exploded view of an exemplary embodiment of the wearable biosensing device of FIG. 1, which includes biosensing logic packaged within a first and second housings.
  • FIG. 3 is an exemplary block diagram of the biosensing logic of FIG. 2.
  • FIG. 4 is an exemplary embodiment of a flow classification scheme for AV access operability conducted by the sensor data processing system deployed within the health monitoring system of FIG. 1.
  • FIG. 5 is an exemplary first classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
  • FIG. 6 is an exemplary second classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
  • FIG. 7 is an exemplary third classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
  • FIG. 8 is an exemplary fourth classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
  • Embodiments of the present disclosure generally relate to a wearable biosensing device operating as part of a health monitoring system that enables reliable examination and remote monitoring of an arteriovenous (AV) access.
  • the operations performed by the wearable biosensing device and/or a sensor data processing system are configured to replicate aspects of the standard-of-care physical examination of the AV access.
  • the AV access is a surgical connection made between an artery and a vein, usually created by a vascular specialist.
  • the AV access facilitates more efficient dialysis than a “line” port due to quicker blood flow during a dialysis session.
  • the AV access is typically located in the patient’s arm; however, if necessary, it can be placed in the leg or another part of the human anatomy.
  • the wearable biosensing device features biosensing logic deployed within a housing that is attached to a patient (wearer).
  • the biosensing logic includes an electronics assembly, a power assembly, and a sensing assembly.
  • the sensing assembly may be positioned between the electronics assembly and the power assembly.
  • the sensing assembly includes a substrate (e.g., printed circuit board) with components mounted thereon.
  • the mounted components may include, but are not limited or restricted to different types of sensing components such as (i) one or more audio sensing components (e.g., microphone, etc.), (ii) a plurality of optical sensing components, and/or (iii) one or more motion and position sensing components (e.g., accelerometer), all of which are positioned on the substrate.
  • the wearable biosensing device features the electronics assembly communicatively coupled to the sensing assembly, and in particular, the above-identified sensing components.
  • the electronics assembly includes (1) processing logic, (2) communications logic, and (3) non-transitory storage medium configured to store data collected by the sensing components (hereinafter, “sensor data”).
  • the processing logic may be configured to initiate a transfer of the collected sensor data to a remote data processing system (e.g., sensor data processing system), which conducts analytics on the data to noninvasively identify a dysfunctional AV access.
  • the analytics may be conducted by bruit detection logic, pulse detection logic, thrill detection logic, and/or classification detection logic operating within the sensor data processing system, as described below.
  • the “sensing components” may include, but are not limited or restricted to (i) a microelectromechanical systems (MEMS) microphone, (ii) optical sensors such as light-emitting diodes (LEDs) in the visible and near infrared parts of the spectrum, and/or (iii) a three-axis accelerometer.
  • the sensor data processing system may be implemented with software-based algorithms (described below) such as, for example, signal processing algorithms directed to wavelet analysis, cepstral coefficient analysis, and/or image analysis of microphone spectrograms. Additionally, or in the alternative, the sensor data processing system may be implemented with classification models that generate predictions of several phenomena including the presence or absence of an auditory-based signal such as a periodic auscultatory signal, the utility of an auscultatory signal for clinical assessment, and/or the presence or absence of auscultatory signal features that are known to correlate with specific modes of AV access dysfunction. In addition to audio analysis models, one embodiment of the disclosure may incorporate algorithms that conduct analytics to measure pulse and thrill data associated with the AV access provided over a network.
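  • As an illustrative sketch of the cepstral coefficient analysis mentioned above (the disclosure does not specify the algorithm at this level; the function name, sampling rate, and frame parameters are assumptions), a log-spectrogram of the microphone signal followed by a DCT over the frequency axis yields per-frame cepstral coefficients suitable as classifier inputs:

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import spectrogram

def cepstral_features(audio, fs=4000, n_coeffs=13):
    """Log-spectrogram of the microphone signal followed by a DCT over
    the frequency axis, yielding cepstral coefficients per time frame."""
    _, _, sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    log_spec = np.log(sxx + 1e-10)               # guard against log(0)
    ceps = dct(log_spec, axis=0, norm="ortho")   # DCT over frequency bins
    return ceps[:n_coeffs].T                     # shape: (frames, n_coeffs)

# Synthetic "bruit-like" signal: a low-frequency tone plus noise.
fs = 4000
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(0)
audio = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
feats = cepstral_features(audio, fs)
```

The resulting feature matrix (one row per time frame) could then be fed to the classification models described in this paragraph.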
  • the wearable biosensing device may feature (1) the sensing components, (2) processing logic, and (3) non-transitory storage medium configured to store data elements that, when utilized by the processing logic, noninvasively identify dysfunctional arteriovenous (AV) accesses using data from the sensing components.
  • the “data elements” may include software-based algorithms such as, for example, signal processing algorithms and/or classification models described above.
  • other algorithms that replicate the other parts of the standard physical examination for AV access health may be deployed as one or more data elements maintained within the non-transitory storage medium of the wearable biosensing device and executed by the processing logic of the wearable biosensing device.
  • In this embodiment, analysis of the AV access health is handled exclusively by the wearable biosensing device, and results of the analytics may be provided to the sensor data processing system for subsequent reporting to the patient, a clinician, or another health care professional.
  • the wearable biosensing device may be situated on the AV access and automatically records sensor data at predetermined times or in response to a triggering event (e.g., manual setting, signaling from a local hub that may be periodic or aperiodic in nature, etc.), in contrast with conventional digital stethoscopes that require training to remotely record audio for clinician review.
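  • A minimal sketch of such a recording trigger, assuming a hypothetical periodic interval and a hub-signalled event flag (the disclosure does not specify the rule):

```python
def should_record(now_s: float, last_record_s: float,
                  interval_s: float = 3600.0,
                  hub_trigger: bool = False) -> bool:
    """Record either when a predetermined interval has elapsed since the
    last recording or when the local hub signals a (possibly aperiodic)
    triggering event. Interval and names are illustrative assumptions."""
    return hub_trigger or (now_s - last_record_s) >= interval_s
```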
  • the sensing component stack offers a unique combination of a microphone, optical sensors, and a three-axis accelerometer, which, unlike digital stethoscopes for example, replicates all aspects of the standard-of-care physical examination of the AV access.
  • Otherwise, patients are required to subjectively assess pulse and thrill themselves, as there is no mechanism for clinicians to collect and transmit objective data related to these portions of the physical exam for remote review.
  • the wearable biosensing device deployed within the health monitoring system (described below) provides such a mechanism.
  • the detected audio by the microphone and/or the detected light by the optical sensors corresponds to collected sensor data associated with physiological properties of that vessel and/or the biological fluid propagating therethrough (e.g., flow, fluid composition, etc.).
  • This sensor data may be useful in monitoring the health of a patient, especially dialysis patients, and may be used to generate an alert signifying a detected health event that is being (or could be) experienced by the wearer of the wearable biosensing device.
  • the wearable biosensing device may include other sensors (e.g., accelerometer, optical, bioimpedance, electrocardiography, etc.) to generate additional sensor data, where these sensors are configured to detect/monitor a physiological property and convert the monitored physiological property into an electrical signal, which is subsequently converted to a data representation of the monitored property as indicated by one or more electrical signals.
  • The term “logic” is representative of hardware, firmware, and/or software that is configured to perform one or more functions.
  • the logic may include circuitry associated with data processing, data storage, and/or data communications. Examples of such circuitry may include, but are not limited or restricted to a processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, sensors, semiconductor memory, and/or combinatorial logic.
  • the logic may include software in the form of one or more software modules (hereinafter, “software module(s)”), which may be configured to support certain functionality upon execution by data processing circuitry.
  • a software module may constitute an executable application, a daemon application, an application programming interface (API), a machine-learning (ML) model or other artificial intelligence-based software, a routine or subroutine, a function, a procedure, an applet, a servlet, source or object code, shared library/dynamic load library, or even one or more instructions.
  • the “software module(s)” may be stored in any type of suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical, or other form of propagated signals such as carrier waves, infrared signals, or digital signals).
  • Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, a hard disk drive, an optical disc drive, a portable memory device, or cloud-based storage (e.g., AWS™ S3 storage, relational or non-relational database storage, etc.).
  • the logic may be stored in persistent storage.
  • The term “attach” and other tenses of the term (e.g., attached, attaching, etc.) may be construed as physically connecting a first component to a second component.
  • The term “interconnect” may be construed as a physical or logical communication path between two or more logic units or components.
  • wired interconnects may be provided as electrical wiring, optical fiber, cable, and/or bus trace.
  • the interconnect may be a wireless channel using short range signaling (e.g., Bluetooth™) or longer range signaling (e.g., infrared, radio frequency “RF” or the like), a communication pathway between two software-based interfaces, or the like.
  • Referring to FIG. 1, a perspective view of an exemplary embodiment of a health monitoring system 10 is shown, where the health monitoring system 10 includes a wearable biosensing device 100, a local hub 130, and a sensor data processing system 140, which is communicatively coupled to a network 170.
  • the local hub 130 operates as an intermediary component to enable the wearable biosensing device 100 to communicate with the sensor data processing system 140.
  • the wearable biosensing device 100 is attached to a patient's arm 110. As shown, the wearable biosensing device 100 is intended to be worn over an arteriovenous (AV) access 120 to monitor operability of the AV access 120 and the vessel 125 (e.g., vein or artery) associated therewith.
  • the wearable biosensing device 100 is configured with components (e.g., microphone) and one or more software modules that collect audio associated with sound emitting from the AV access 120 to monitor fistula bruit, sometimes referred to as a vascular murmur.
  • Bruit is a detected sound that operates as a reliable indicator of how well the AV access is functioning.
  • the wearable biosensing device 100 is configured with other components (e.g., optical sensor(s) and/or accelerometer) and one or more software modules, which are configured to determine a heart rate (pulse) of the patient and/or thrill, namely the vibration felt upon palpation of the vessel 125 associated with the AV access 120. Either the absence of thrill or the presence of certain abnormal thrill may suggest stenosis of the underlying vessel 125, or the abnormal thrill may be transmitted from another source.
  • the monitoring for a healthy AV access includes analytics associated with (i) bruit (e.g., a rumbling sound that can be heard); (ii) a blood flow rate within a prescribed range; and (iii) a thrill (e.g., a rumbling sensation that can be felt).
  • the sensor data processing system 140 may send a message to initiate an alert 150.
  • the wearable biosensing device 100 is configured to direct collected information from sensing components installed within the wearable biosensing device 100 (sensor data) to the sensor data processing system 140, which is remotely located from the wearable biosensing device 100.
  • the wearable biosensing device 100 is configured to monitor properties (e.g., characteristics and operability) of the AV access 120 by collecting information associated with the AV access 120 and the biological fluid propagating therethrough (e.g., noise and flow measurements associated with flow, bruit, pulse, or thrill, etc.).
  • the collected information may be used for remote monitoring, where the sensor data processing system 140 is configured to determine whether a health event (caused by inferior operability of the AV access 120) exists, which warrants generation and transmission of an alert 150 to the patient or an individual or system involved with the care of the patient.
  • the remote monitoring may involve transmission via interconnects of the collected information from the wearable biosensing device 100 to a local hub 130.
  • the “local hub” 130 constitutes logic (e.g., a device, an application, etc.) that converts the collected sensor data of a first data representation 160 (hereinafter, “first data representation”) provided in accordance with a first transmission protocol (e.g., Bluetooth™ or other short distance (wireless) transmission protocol) into sensor data associated with a second data representation 165 (hereinafter, “second data representation”).
  • the second data representation 165 may be provided via interconnects in accordance with a second transmission protocol (e.g., cellular, WiFi™, or other long distance (wireless) transmission protocol), and the local hub 130 routes the second data representation 165 to the sensor data processing system 140.
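  • The hub’s protocol conversion might look like the following sketch, assuming a hypothetical 12-byte little-endian BLE record layout and a JSON document for the long-range link; the actual data representations are not specified in the disclosure:

```python
import json
import struct

def convert_ble_payload(payload: bytes, device_id: str) -> str:
    """Unpack a compact little-endian BLE record (hypothetical layout:
    uint32 timestamp, 3x int16 acceleration, uint16 pulse) into the JSON
    document carried over the long-range (cellular/WiFi) link."""
    ts, ax, ay, az, pulse = struct.unpack("<IhhhH", payload)
    return json.dumps({
        "device": device_id,
        "timestamp": ts,
        "accel_mg": [ax, ay, az],
        "pulse_bpm": pulse,
    })

# Example: a 12-byte record as the device might send it.
record = struct.pack("<IhhhH", 1700000000, 12, -3, 980, 72)
doc = convert_ble_payload(record, "patch-01")
```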
  • the sensor data processing system 140 features a processor 180 and a non-transitory storage medium 182, which maintains bruit detection logic 184, pulse detection logic 186, thrill detection logic 188, and/or alert generation logic 190.
  • the bruit detection logic 184 includes one or more software modules operating with the processor 180 to detect bruit from the collected sensor data (second data representation 165), namely sounds generated by turbulent flow of blood in the AV access 120, in which an abnormal or absent sound may identify stenosis (e.g., narrowing of the vessel 125 and partial obstruction of the AV access 120).
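  • One simple, hypothetical way to flag the presence or absence of bruit energy is band-power thresholding of the microphone signal; the band limits and threshold below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bruit_present(audio, fs=4000, band=(50.0, 400.0), threshold=0.01):
    """Band-pass the audio to the range where bruit energy is expected,
    then compare mean band power against a (hypothetical) threshold."""
    b, a = butter(4, band, btype="band", fs=fs)
    return float(np.mean(filtfilt(b, a, audio) ** 2)) > threshold

fs = 4000
t = np.arange(fs * 2) / fs
tone = np.sin(2 * np.pi * 120 * t)   # periodic component inside the band
silence = np.zeros_like(t)           # no auscultatory signal at all
```

An absent periodic component (as in `silence`) would fall below the threshold, mirroring the abnormal/absent-sound case described above.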
  • the pulse detection logic 186 may include one or more software modules operating with the processor 180 to detect pulse rate of the wearer of the biosensing device 100.
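  • A sketch of pulse estimation from a PPG waveform by peak counting; this is illustrative only, as the disclosed pulse detection logic is not specified at this level:

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_rate_bpm(ppg, fs=100):
    """Estimate heart rate by locating systolic peaks in the PPG
    waveform and averaging the inter-beat intervals."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs),
                          prominence=0.5 * float(np.std(ppg)))
    if len(peaks) < 2:
        return None
    return 60.0 * fs / float(np.mean(np.diff(peaks)))

fs = 100
t = np.arange(fs * 10) / fs
ppg = np.sin(2 * np.pi * 1.2 * t)    # synthetic waveform at 72 bpm
rate = pulse_rate_bpm(ppg, fs)
```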
  • the thrill detection logic 188 includes one or more software modules operating with the processor 180 on sensor data collected by one or more motion sensing components 370 (e.g., an accelerometer), which is one of the sensing components configured to assist in detecting thrill. Thrill represents a palpable vibration caused by movement of blood through a blood vessel, displacing the vessel and the overlying tissue, and is normally assessed by physical palpation of the blood vessel 125 associated with the AV access 120 or of the AV access 120 itself.
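  • For illustration, thrill might be quantified as the band-limited RMS of the accelerometer magnitude; the vibration band and sampling rate below are assumptions, not values from the disclosure:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def thrill_level(accel_xyz, fs=500, band=(10.0, 100.0)):
    """Band-limited RMS of the acceleration magnitude, a proxy for the
    palpable vibration (thrill) over the AV access."""
    mag = np.linalg.norm(accel_xyz, axis=1)
    mag = mag - mag.mean()                           # drop gravity / DC
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    vib = sosfiltfilt(sos, mag)
    return float(np.sqrt(np.mean(vib ** 2)))

fs = 500
t = np.arange(fs * 4) / fs
still = np.column_stack([np.zeros_like(t), np.zeros_like(t),
                         np.full_like(t, 9.81)])      # gravity only
vibrating = still.copy()
vibrating[:, 2] += 0.5 * np.sin(2 * np.pi * 30 * t)  # 30 Hz vibration
```

An absent thrill (as in `still`) would yield a near-zero level, matching the absence-of-thrill case that may suggest stenosis.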
  • the sensor data processing system 140 may include the alert generation logic 190, which generates and sends the alert (notification) 150 upon detecting an occurring (or potential) health event that requires attention by a doctor and/or another specified person (including the patient) responsible for addressing any occurring (or potential) health event.
  • the alert 150 may be sent via network 170 for notification over a monitored website or may be sent from the sensor data processing system 140 as an electronic mail (e-mail) message, a text message, or any other signaling mechanism.
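  • A hypothetical alert payload builder, sketching what the alert generation logic 190 might hand to an e-mail, text message, or web-notification transport; the field names are assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def build_alert(patient_id: str, finding: str,
                severity: str = "warning") -> str:
    """Assemble a (hypothetical) alert document for downstream delivery
    via e-mail, text message, or a monitored website."""
    return json.dumps({
        "patient": patient_id,
        "finding": finding,
        "severity": severity,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    })

alert = build_alert("patient-42", "absent bruit on scheduled recording")
```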
  • the bruit detection logic 184, pulse detection logic 186, thrill detection logic 188, and/or alert generation logic 190 may be deployed within the local hub 130 or within the wearable biosensing device 100.
  • Each of the local hub 130 and wearable biosensing device 100 includes processing logic and non-transitory storage medium to retain this logic.
  • the wearable biosensing device 100 includes a first (top) housing 200, packaged biosensing logic 220, a shielding component 240, a second (bottom) housing 250, and an adhesive layer 270.
  • the first housing 200 features multiple (e.g., two or more) lobes 205 formed as part of the first housing 200.
  • the first housing 200 may be made of a flexible, water-impervious material (e.g., a polymer such as silicone, plastic, etc.) through a molding process, where the lobes 205 provide internal chambers for housing the biosensing logic 220.
  • the lobes 205 are positioned in a linear orientation, with a first plurality of lobes (e.g., first, second and third lobes 210-212) interconnected by a second plurality of lobes (e.g., fourth and fifth lobes 213-214).
  • the fourth and fifth lobes 213-214 are configured to house interconnects 222 and 223, which provide electrical connections between an electronics assembly 225, a sensing assembly 230, and a power assembly 235 of the biosensing logic 220.
  • Each of the assemblies, namely the electronics assembly 225, the sensing assembly 230, and the power assembly 235, is maintained within a protective package 226, 231, and 236, respectively.
  • The electronics assembly 225 is housed within the first lobe 210. As shown in FIGS. 2-3, the electronics assembly 225 includes a substrate 300 with processing logic 310, communications logic 320, and a non-transitory storage medium 330 mounted thereon.
  • the non-transitory storage medium 330 maintains sensor data aggregation logic 340, which includes one or more software modules 342 operating with the processing logic 310 and the audio sensing components 375 to collect and aggregate information associated with bruit, namely sounds generated by turbulent flow of blood in the AV access 120 of FIG. 1 in which an abnormal sound may identify stenosis (e.g., narrowing of the vessel and partial obstruction of the AV access 120).
  • the sensor data aggregation logic 340 may further include one or more software modules 344 operating with the processing logic 310 and one or more optical sensors 365 to collect and aggregate data associated with a pulse rate of the wearer of the biosensing device 100. Additionally, the sensor data aggregation logic 340 may further include one or more software modules 346 operating with the processing logic 310 and an orientation sensing component 370 (e.g., an accelerometer) that collects and aggregates information associated with thrill, such as measured vibration levels based on measured palpations of the blood vessel 125 associated with the AV access 120 or measured palpations at the AV access 120 of FIG. 1.
  • an orientation sensing component 370 (e.g., an accelerometer)
  • logic 340 of the electronics assembly 225 is configured to (i) collect sensor data gathered by the sensing assembly 230, (ii) store the sensor data (raw) and/or conduct analytics on the collected sensor data, and (iii) communicate, via a wireless or a wired connection, the sensor data and/or the analytic results to a device (e.g., local hub 130 of FIG. 1) remotely located from the wearable biosensing device 100.
  • the electronics assembly 225 may be adapted to transmit the first data representation 160 of the collected sensor data to the local hub 130 as shown in FIG. 1.
  • the sensing assembly 230 is housed within the second lobe 211 of the first housing 200.
  • the sensing assembly 230 includes a substrate 350 and one or more sensors 360 (hereinafter, “sensor(s)”) mounted on a posterior surface 355 of the substrate 350.
  • the sensor(s) 360 may include one or more optical sensors configured to emit light and/or detect reflected or refracted light.
  • the optical sensors 360 may include a plurality of photo-plethysmograph (PPG) sensors 365, where each of the plurality of PPG sensors 365 includes multiple light sourcing elements and multiple light detecting elements.
  • PPG photo-plethysmograph
  • the sensing assembly 230 may be mounted on or positioned proximate to the AV access 120 of FIG. 1.
  • the optical sensors 360 may be used to obtain different measurements of properties of the AV access 120 of FIG. 1 and provide this data to the electronics assembly 225 for transmission and/or analysis.
  • the optical sensors 360 may be arranged in a linear arrangement (as shown) or a circular arrangement with a light sourcing member being positioned centrally and light detecting members distributed radially from the central light sourcing member.
  • the sensing assembly 230 may be configured to include the orientation sensing component 370 such as an accelerometer to measure vibration associated with the thrill and an audio sensing component 375 (e.g., microphone, etc.).
  • the optical sensors 360 are positioned to emit or detect light via the shielding component 240 as described below.
  • the power assembly 235 includes a substrate 380, power management logic 385, and power supply logic 390.
  • the power supply logic 390 is configured to provide power to both the components within the sensing assembly 230 as well as the electronics assembly 225.
  • the power management logic 385 is configured to control the distribution of power (e.g., amount, intermittent release, or duration), including disabling of power when the wearable biosensing device 100 is not installed on, or is detached from, the wearer to avoid false data collection.
  • the substrate 350 of the sensing assembly 230 may include hardwired traces (power layers) for routing of power from the power assembly 235 to components of the sensing assembly 230 and/or components of the electronics assembly 225.
  • the second housing 250 is configured with a centralized, raised opening 255 that is sized to surround a perimeter of the shielding component 240.
  • a top surface 256 of the raised opening 255 is positioned adjacent to a bottom surface 232 of the protective package 231 for the sensing assembly 230.
  • the raised opening 255 may further include lateral flanges 257, which are sized to reside within complementary lateral recesses 258 within the shielding component 240.
  • the first housing 200 and the second housing 250 substantially encapsulate the protective packages 226 and 236 while providing partial encapsulation of the protective package 231 inclusive of the sensing assembly 230.
  • the adhesive layer 270 is applied to at least a portion of a bottom surface 266 of the second housing 250.
  • the adhesive layer 270 is adapted to attach to a surface of a patient’s skin and remain attached thereto.
  • the adhesive layer 270 may include multiple layers for replacement of the second housing 250 without replacing the packaged biosensing logic 220.
  • a plurality of magnets may be positioned within the second housing 250. These magnets may establish a magnetic coupling to metal fastening elements (e.g., metal connection points) positioned under the power assembly 235 and/or electronics assembly 225 and/or positioned at ends of the protective packages 226 and 236. Alternatively, the magnets may be positioned as part of the biosensing logic 220 and accessible to metal fastening elements positioned on the second housing 250.
  • the sensor data processing system 140 is configured to determine whether captured sensor data 400, such as captured audio that may include data recordings containing periodic acoustic signaling provided by the biosensing wearable device, is determinative of a particular flow classification.
  • captured audio 400 undergoes signal processing by the electronics assembly 225 of FIGS. 2-3 to produce a message inclusive of the captured sensor data 400, such as a spectrograph 410 associated with captured audio, which may be provided as part of the second data representation 165 of FIG. 1 to the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188 of the sensor data processing system 140.
  • the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188 are configured to operate to discern flow classifications associated with the AV access. It is contemplated that the captured audio represented by the spectrograph 410 may be included, either in total or in part, with the message 160, namely the first data representation 160 of the collected sensor data.
  • the captured audio associated with the spectrograph 410 may constitute a first spectrograph 420 associated with a first flow classification (Class 1), a second spectrograph 430 associated with a second flow classification (Class 2), a third spectrograph 440 associated with a third flow classification (Class 3), or a fourth spectrograph 450 associated with a fourth flow classification (Class 4).
  • the flow classifications may include, but are not limited or restricted to the following: (1) Class 1 - harsh upstroke with minimal diastolic flow; (2) Class 2 - low amplitude, low frequency; (3) Class 3 - normal flow; and (4) Class 4 - Non-classifiable.
  • class 1 denotes AV access operability concerns caused by vascular stenosis 500 downstream from the wearable biosensing device (SmartPatch) 100.
  • the AV access operability concerns are based on sensor data collected by the wearable biosensing device 100. As measured, data extracted from captured high frequency and high amplitude audio 510, which is illustrated in spectrograph 520 (see class 1 spectrograph 420 of FIG. 4), is determined by any one or more of the detection logic 184/186/188, in accordance with one or more of the ML models described below, to be indicative of downstream vascular stenosis.
  • class 2 denotes AV access operability concerns caused by vascular stenosis 600 upstream from the wearable biosensing device (SmartPatch) 100 of FIGS. 1-3. These AV access operability concerns are based on sensor data collected by the wearable biosensing device 100. As measured, data extracted from captured low frequency and low amplitude audio 610, which is illustrated in spectrograph 620 (see class 2 spectrograph 430 of FIG. 4), is determined by any one or more of the detection logic 184/186/188, in accordance with one or more of the ML models described below, to be indicative of upstream vascular stenosis.
  • class 3 denotes healthy AV access operability based on sensor data measured by the wearable biosensing device (SmartPatch) 100 of FIGS. 1-3. As measured, the collected sensor data constitutes audio 700 with consistent amplitude as illustrated in spectrograph 710 (see class 3 spectrograph 440 of FIG. 4).
  • class 4 denotes a condition that, based on artificial intelligence-based (AI-based) analytics conducted on the audio-based sensor data 800 as illustrated in spectrograph 810 (see class 4 spectrograph 450 of FIG. 4), AV access operability cannot be determined through trained, AI-based models. This may indicate a complete stenosis of the AV access.
  • AI-based artificial intelligence-based
  • the flow classification conducted by the sensor data processing system 140 of FIG. 1 may be determined by the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188, each of which may correspond (or collectively may correspond) to one or more machine-learning (ML) models that conduct analytics on at least a feature set obtained from the audio captured by the audio sensing component 375 (e.g., features from audio collected by the microphone).
  • the feature set for the ML model(s) may include sparse time domain features, the seven (7) Hu moments, four (4) wavelet scale energies, and/or beat density.
  • MFCCs Mel Frequency Cepstral Coefficients
  • the MFCCs include a set of twenty (20) coefficients that are calculated from the power spectrum of the audio signal.
  • the MFCCs are calculated using the following operations: (1) conduct a Fourier transform of a monitored audio signal; (2) map the powers of the resulting spectrum onto the Mel scale, using triangular overlapping windows; (3) take logarithm measurements of the powers at each of the Mel frequencies; and (4) conduct a discrete cosine transform of the list of Mel log powers.
  • the MFCCs correspond to the amplitudes of the resulting spectrum, and thus, constitute a well-established compressed representation of the perceptual qualities of a sound signal.
  • MFCC 1, 4, 5, 6 and 7 may be included in the feature vector, albeit any combination of MFCC from 1-7 may be included in the feature vector.
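The four MFCC operations above can be sketched as follows. This is a simplified, single-frame illustration assuming a monaural signal; the filterbank size, constants, and function names are illustrative and not taken from the patent.

```python
import numpy as np

def hz_to_mel(f):
    # standard mel-scale mapping
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_filters=26, n_coeffs=20):
    """Sketch of the four steps: FFT power spectrum -> mel filterbank ->
    log -> DCT. Returns the first n_coeffs cepstral coefficients."""
    # (1) Fourier transform; power spectrum of the signal
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    n_bins = spectrum.size

    # (2) triangular, overlapping filters spaced evenly on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bin_pts = np.floor((n_bins - 1) * mel_to_hz(mel_pts) / (sr / 2.0)).astype(int)
    fbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        left, center, right = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        for b in range(left, center):
            fbank[i, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):
            fbank[i, b] = (right - b) / max(right - center, 1)
    mel_powers = fbank @ spectrum

    # (3) logarithm of the power in each Mel band
    log_powers = np.log(mel_powers + 1e-12)

    # (4) discrete cosine transform (DCT-II) of the Mel log powers
    n = np.arange(n_filters)
    k = np.arange(n_coeffs)[:, None]
    dct_basis = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters))
    return dct_basis @ log_powers
```

A call such as `mfcc(audio, sr=4000)` returns a vector of twenty coefficients, from which a subset (e.g., MFCC 1, 4, 5, 6, and 7) could then be placed in the feature vector.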
  • the bruit detection logic 184 and/or the thrill detection logic 188 may also conduct analytics that include the Hu moments.
  • the Hu moments are a set of seven (7) statistical moments of an image, which may be based on Mel spectrograms of the audio signal.
  • the input vector takes these moments averaged across the Mel spectrograms of all framed heartbeats in the microphone signal.
  • the Hu moments are calculated using the following operations: (1) frame the heartbeats in the microphone signal; (2) calculate the average Mel spectrogram across all heartbeats; and (3) calculate the Hu moments of the average Mel spectrogram.
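For reference, the seven Hu invariant moments of a 2-D array (such as an average Mel spectrogram of framed heartbeats) can be computed from normalized central moments as sketched below; this is a standard textbook formulation, and the function name is illustrative.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a nonnegative 2-D array
    (e.g., an average Mel spectrogram of framed heartbeats)."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment of order (p, q)
        mu = (((x - cx) ** p) * ((y - cy) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    a, b = e30 + e12, e21 + e03  # recurring sums
    return np.array([
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        a ** 2 + b ** 2,
        (e30 - 3 * e12) * a * (a ** 2 - 3 * b ** 2)
            + (3 * e21 - e03) * b * (3 * a ** 2 - b ** 2),
        (e20 - e02) * (a ** 2 - b ** 2) + 4 * e11 * a * b,
        (3 * e21 - e03) * a * (a ** 2 - 3 * b ** 2)
            - (e30 - 3 * e12) * b * (3 * a ** 2 - b ** 2),
    ])
```

Because the moments are computed from central moments, the resulting features are invariant to translation of the spectrogram content, which is part of what makes them useful statistical descriptors of shape.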
  • MFCCs and the Hu moments are related, and thus, such operations are useful for bruit and/or thrill determinations.
  • the MFCC and Hu moments flow classifications are useful as these classifications provide analytics as to the shape of the sensor data, whether the sensor data is provided from audio components or from physical perturbation of the motion sensing components.
  • Wavelet scale energies are a set of energies that are calculated from the wavelet transform of the audio data captured by the audio sensing component 375 (e.g., microphone, etc.) implemented with the wearable biosensing device 100 of FIG. 3.
  • the wavelet scale energies are calculated using the following operations: (1) define the wavelet function and decomposition level; (2) apply wavelet decomposition to the audio data; and (3) extract scale energies from the wavelet coefficients. Such operations may be performed by the bruit detection logic 184.
  • the wavelet scale energies characterize the AV access sounds at different frequencies, thereby providing additional information about the AV access (fistula) sounds that is not captured by the other features.
  • the ML models described herein use the db4 wavelet and decomposition level of 4.
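The three operations above can be sketched as follows. To keep the sketch dependency-free, a Haar wavelet stands in for the db4 wavelet used by the ML models; a production implementation would typically use a wavelet library (e.g., PyWavelets' `wavedec` with `'db4'` and `level=4`). The function name is illustrative.

```python
import numpy as np

def haar_dwt_energies(signal, level=4):
    """Wavelet scale energies via a level-by-level Haar DWT: the energy
    of the detail coefficients at each scale, plus the energy of the
    final approximation. (The patent's models use db4 at level 4; Haar
    is a simplified stand-in.)"""
    approx = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(level):
        if approx.size % 2:                     # pad to even length
            approx = np.append(approx, approx[-1])
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))  # energy at this scale
    energies.append(float(np.sum(approx ** 2)))      # approximation energy
    return energies
```

Because the Haar transform is orthonormal, the scale energies sum to the total signal energy (absent padding), which is a convenient sanity check on the decomposition.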
  • Beat Density is the fraction of the signal within consecutive heartbeats detected by the framer. Conducted by the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188, the beat density may be calculated using the following operations: (1) convert the audio signal into an envelope, using the wavelet lowpass method described below; (2) identify heartbeats based on their peak locations, filter out heartbeats where the next peak is not approximately one period from the current peak, and mark samples between valid peaks as "within a heartbeat"; and (3) find the fraction of samples that are within a heartbeat in a read, or other window of data.
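The three beat-density operations can be sketched as follows. A moving-average envelope stands in for the patent's wavelet lowpass envelope, and the peak picker, bpm bounds, and function name are illustrative assumptions.

```python
import numpy as np

def beat_density(audio, sr, min_bpm=40, max_bpm=180):
    """Fraction of samples falling between consecutive, plausibly
    spaced heartbeat peaks (illustrative stand-in for the framer)."""
    # (1) envelope of the rectified signal (moving-average stand-in
    #     for the wavelet lowpass method)
    win = max(int(0.05 * sr), 1)
    env = np.convolve(np.abs(audio), np.ones(win) / win, mode="same")

    # (2) crude peak picking: local maxima above the mean envelope
    thr = env.mean()
    peaks = [i for i in range(1, env.size - 1)
             if env[i] > thr and env[i] >= env[i - 1] and env[i] > env[i + 1]]
    # keep an interval only if the next peak is roughly one period away
    min_gap, max_gap = int(sr * 60 / max_bpm), int(sr * 60 / min_bpm)
    within = np.zeros(env.size, dtype=bool)
    for p, q in zip(peaks, peaks[1:]):
        if min_gap <= q - p <= max_gap:
            within[p:q] = True          # samples between valid peaks

    # (3) fraction of samples that fall within a heartbeat
    return float(within.mean())
```

On a strongly periodic pulse train the density approaches one, while on silence or aperiodic noise it stays near zero, which is the discriminative behavior the feature relies on.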
  • Autocorrelation in this context refers to the correlation of the audio signal with itself at different time lags.
  • Autocorrelation is a fundamental technique used to identify a repeated cardiac cycle reflected in every sensor modality, inclusive of the audio sensing components 375 and/or motion sensing components 370 of FIG. 3.
  • the maximum autocorrelation of the audio or its envelope, within the range of lags that may contain the heartbeat fundamental frequency, is used as a feature. Conducted by the pulse detection logic 186, the autocorrelation feature is calculated using the following operations: (1) calculate the autocorrelation of the audio signal with itself at different time lags; and (2) take the maximum autocorrelation of the audio or its envelope within the range of lags that may contain the heartbeat fundamental frequency.
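The two operations above can be sketched as follows; the bpm bounds and function name are illustrative assumptions, and an FFT-based autocorrelation is used for efficiency.

```python
import numpy as np

def max_heartbeat_autocorr(signal, sr, min_bpm=40, max_bpm=180):
    """Maximum normalized autocorrelation within the lag range that
    could contain the heartbeat fundamental period."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # (1) autocorrelation at all lags (zero-padded, FFT-based)
    n = 2 * x.size
    spec = np.fft.rfft(x, n)
    ac = np.fft.irfft(spec * np.conj(spec), n)[: x.size]
    ac /= ac[0]                       # normalize so lag 0 == 1
    # (2) maximum within the plausible heartbeat-period lags
    lo = int(sr * 60.0 / max_bpm)
    hi = min(int(sr * 60.0 / min_bpm), x.size - 1)
    return float(ac[lo:hi + 1].max())
```

A strongly periodic signal yields a value near one at a lag matching its period, while broadband noise yields a small value, making the maximum a simple periodicity score.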
  • classification models were assessed for use by the detection logic described above. Although a number of classification models may be utilized, for AV access classification, tree-based models generally performed better than the other models, so Random Forest, Gradient Boosting, and Extra Trees models were selected as the final component classifiers. These models are defined below.
  • Random forest classification is an ensemble learning method that combines the predictions from multiple weak learners (specifically, decision trees) to generate a more accurate final prediction. Additional features may be added to or removed from the input vector as a result of a rigorous, semi-automated feature selection process that is run every time these algorithms are trained or tuned. This process not only aims to identify the features that contain the most predictive information, which can then be analyzed and visualized more thoroughly using the simpler, non-ensemble methods described below, but also to remove from analysis any quasi-constant features, features with low mutual information, and highly correlated, pseudo-redundant features.
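Two parts of the screening step described above, dropping quasi-constant features and dropping one of each pair of highly correlated, pseudo-redundant features, can be sketched as follows. The thresholds and function name are illustrative, and the mutual-information check is omitted for brevity.

```python
import numpy as np

def filter_features(X, names, var_tol=1e-10, corr_tol=0.95):
    """Toy feature screen: X is (n_samples, n_features), names labels
    each column. Returns the names of the retained features."""
    X = np.asarray(X, dtype=float)
    # (1) drop quasi-constant features
    keep = [j for j in range(X.shape[1]) if X[:, j].var() > var_tol]
    # (2) greedily drop the later feature of any highly correlated pair
    selected = []
    for j in keep:
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > corr_tol
            for k in selected)
        if not redundant:
            selected.append(j)
    return [names[j] for j in selected]
```

For example, given a constant column, a feature, an affine copy of that feature, and an independent feature, only the first feature and the independent one survive.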
  • Gradient Boosting is an ensemble learning method, so the same principles of feature selection and parameter tuning apply.
  • the model’s input features are subject to the same rigorous, semi-automated feature selection process described above, and hyperparameter tuning is key to avoiding overfitting the model.
  • the number of individual decision trees used to build the random forest needs to be large enough to capture the benefits of using the ensemble method, but too many trees can lead to exceptionally long runtimes.
  • the maximum number of branches (how many "decisions" are made to yield a classification decision) is critical for limiting overfitting; if this number is too large, the model may spuriously interpret randomness in the training data as predictive information.
  • Extra Trees classification model is similar to the Random Forest, so the same principles of feature selection and parameter tuning apply.
  • the main differences are the lack of sampling with replacement when building trees and the use of a random split (rather than the optimal split) of features at each node. These two changes are designed to minimize overfitting compared to the Random Forest.
  • the ensemble prediction model produces a quaternary output corresponding to different classes of audio signal.
  • Classes 1-3 contain a periodic signal, whereas Class 4 does not.
  • the model makes a prediction on the read and returns a string value indicating whether there is a detectable periodic acoustic signal. This string value is stored in a relational or nonrelational database in the same manner as other quantitative and classification results generated by the biosensing device and sensor data processing system described above.
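As a toy illustration of how the quaternary ensemble output might be reduced to the stored string value, a simple majority vote over the component classifiers is sketched below; the label strings and voting scheme are illustrative assumptions, not the patent's actual ensemble logic.

```python
import numpy as np

def ensemble_flow_class(component_predictions):
    """Majority vote over the component classifiers' Class 1-4
    predictions; returns the string value stored for the read."""
    labels = {
        1: "Class 1: harsh upstroke, minimal diastolic flow",
        2: "Class 2: low amplitude, low frequency",
        3: "Class 3: normal flow",
        4: "Class 4: non-classifiable (no periodic signal)",
    }
    # tally votes for classes 1-4 (index 0 unused)
    votes = np.bincount(component_predictions, minlength=5)
    return labels[int(votes[1:].argmax()) + 1]
```

The returned string would then be written to the database alongside the other quantitative and classification results.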


Abstract

A health monitoring system features a wearable biosensing device and a sensor data processing system. The biosensing device includes a housing and biosensing logic that comprises an electronics assembly and a sensing assembly configured to monitor operability of an arteriovenous (AV) access located proximate to the sensing assembly. The sensing assembly includes different types of sensing components. Communicatively coupled to the wearable biosensing device, the sensor data processing system is configured to receive the sensor data and conduct analytics on the sensor data using machine learning models to identify whether the AV access is operating at a level that indicates that the AV access is normal, dysfunctional, or becoming dysfunctional.

Description

SYSTEM AND METHOD FOR AUTOMATED, NONINVASIVE IDENTIFICATION OF ARTERIOVENOUS ACCESS DYSFUNCTION
FIELD
[0001] Embodiments of the disclosure relate to the field of wearable biosensing devices. More specifically, one embodiment of the disclosure relates to a biosensing device with one or more sensors positioned to assist in identifying a dysfunctional arteriovenous (AV) access from collected data.
GENERAL BACKGROUND
[0002] The following description includes information that may be useful in understanding the described invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Monitoring of arteriovenous (AV) access is crucial to avoiding hospitalizations and maintaining the health of hemodialysis patients. An AV access is a surgically created high flow vessel that lies close enough to the surface and is large enough to withstand repeated cannulation. The AV access is needed for receiving maintenance hemodialysis, and therefore, early detection of access dysfunction is helpful in monitoring patient health. In response to AV access failure, the patient may need to receive dialysis via a central venous catheter for weeks or months until a new AV access is ready. Central venous catheter dialysis exposes the patient to higher risks of sepsis and heart failure.
[0004] Physical examination by a trained physician remains the gold standard technique for noninvasive screening of AV accesses to identify whether an AV access is beginning to fail. The standard physical examination seeks to identify the presence of normal AV access characteristics, such as (i) audio associated with blood flow (e.g., a high-frequency bruit), normally detected via a stethoscope, (ii) pulse, and (iii) thrill, represented by vibrations caused by palpation of a vessel and normally checked with the fingertips. Thereafter, a determination is made as to whether any or all of the bruit, pulse, and/or thrill is abnormal, resulting in referral for an ultrasound and/or further vascular assessment.
[0005] Currently, there is no system or method of operation that conducts real-time measurements of AV access operations to enable reliable, remote surveillance of its health. Such a system or method of operation would enable clinicians to identify and report potential AV access issues in advance of physical examinations (which may not be performed with sufficient regularity by a well-trained clinician) or critical AV access failures, and would allow clinicians to instruct patients in self-examination of their AV accesses rather than relying on patient reporting only when more dire AV access issues occur.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
[0007] FIG. 1 is a perspective, anterior-facing view of an exemplary embodiment of a wearable biosensing device deployed within a health monitoring system.
[0008] FIG. 2 is an exploded view of an exemplary embodiment of the wearable biosensing device of FIG. 1, which includes biosensing logic packaged within first and second housings.
[0009] FIG. 3 is an exemplary block diagram of the biosensing logic of FIG. 2.
[0010] FIG. 4 is an exemplary embodiment of a flow classification scheme for AV access operability conducted by the sensor data processing system deployed within the health monitoring system of FIG. 1.
[0011] FIG. 5 is an exemplary first classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
[0012] FIG. 6 is an exemplary second classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
[0013] FIG. 7 is an exemplary third classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
[0014] FIG. 8 is an exemplary fourth classification of AV access operability as measured by the wearable biosensing device (SmartPatch) and computed by the sensor data processing system of FIG. 1.
DETAILED DESCRIPTION
[0015] Embodiments of the present disclosure generally relate to a wearable biosensing device operating as part of a health monitoring system that enables reliable examination and remote monitoring of an arteriovenous (AV) access. The operations performed by the wearable biosensing device and/or a sensor data processing system are configured to replicate aspects of the standard of care experienced by a physical examination of the AV access. Herein, the AV access is a surgical connection made between an artery and a vein, usually created by a vascular specialist. The AV access facilitates more efficient dialysis than a "line" port due to quicker blood flow during a dialysis session. The AV access is typically located in the patient's arm; however, if necessary, it can be placed in the leg or another part of the human anatomy.
[0016] Herein, according to one embodiment of the disclosure, the wearable biosensing device features biosensing logic deployed within a housing that is attached to a patient (wearer). The biosensing logic includes an electronics assembly, a power assembly, and a sensing assembly. As an illustrative example, the sensing assembly may be positioned between the electronics assembly and the power assembly. The sensing assembly includes a substrate (e.g., printed circuit board) with components mounted thereon. The mounted components may include, but are not limited or restricted to different types of sensing components such as (i) one or more audio sensing components (e.g., microphone, etc.), (ii) a plurality of optical sensing components, and/or (iii) one or more motion and position sensing components (e.g., accelerometer), all of which are positioned on the substrate.
[0017] More specifically, according to one embodiment of the disclosure, the wearable biosensing device features the electronics assembly communicatively coupled to the sensing assembly, and in particular, the above-identified sensing components. The electronics assembly includes (1) processing logic, (2) communications logic, and (3) non-transitory storage medium configured to store data collected by the sensing components (hereinafter, “sensor data”). The processing logic may be configured to initiate a transfer of the collected sensor data to a remote data processing system (e.g., sensor data processing system), which conducts analytics on the data to noninvasively identify a dysfunctional AV access. The analytics may be conducted by bruit detection logic, pulse detection logic, thrill detection logic, and/or classification detection logic operating within the sensor data processing system, as described below. Herein, the “sensing components” may include, but are not limited or restricted to (i) a microelectromechanical systems (MEMS) microphone, (ii) optical sensors such as light-emitting diodes (LEDs) in the visible and near infrared parts of the spectrum, and/or (iii) a three-axis accelerometer.
[0018] Herein, the sensor data processing system may be implemented with software-based algorithms (described below) such as, for example, signal processing algorithms directed to wavelet analysis, cepstral coefficient analysis, and/or image analysis of microphone spectrograms. Additionally, or in the alternative, the sensor data processing system may be implemented with classification models that generate predictions of several phenomena including the presence or absence of an auditory-based signal such as a periodic auscultatory signal, the utility of an auscultatory signal for clinical assessment, and/or the presence or absence of auscultatory signal features that are known to correlate with specific modes of AV access dysfunction. In addition to audio analysis models, one embodiment of the disclosure may incorporate algorithms that conduct analytics to measure pulse and thrill data associated with the AV access provided over a network.
[0019] According to another embodiment of the disclosure, the wearable biosensing device may feature (1) the sensing components, (2) processing logic, and (3) non-transitory storage medium configured to store data elements that, when utilized by the processing logic, noninvasively identify dysfunctional arteriovenous (AV) accesses using data from the sensing components. Herein, according to one embodiment of the disclosure, the "data elements" may include software-based algorithms such as, for example, the signal processing algorithms and/or classification models described above. In addition to audio analysis models, other algorithms that replicate the other parts of the standard physical examination for AV access health may be deployed as one or more data elements maintained within the non-transitory storage medium of the wearable biosensing device and executed by its processing logic. Hence, analysis of the AV access health is handled exclusively by the wearable biosensing device, and results of the analytics may be provided to the sensor data processing system for subsequent reporting to the patient, a clinician, or another health care professional.
[0020] For both of the above-described embodiments, the wearable biosensing device may be situated on the AV access and automatically record sensor data at predetermined times or in response to a triggering event (e.g., a manual setting, signaling from a local hub that may be periodic or aperiodic in nature, etc.), in contrast with conventional digital stethoscopes that require training to remotely record audio for clinician review. The sensing component stack offers a unique combination of microphone, optical sensors, and three-axis accelerometer, which, compared to digital stethoscopes for example, replicates all aspects of the standard of care physical examination of the AV access.
Currently, patients are required to subjectively assess pulse and thrill themselves, as there is no mechanism for clinicians to collect, and transmit for remote review, objective data related to these portions of the physical exam. The wearable biosensing device deployed within the health monitoring system (described below) provides such a mechanism.
[0021] According to one embodiment of the disclosure, the audio detected by the microphone and/or the light detected by the optical sensors corresponds to collected sensor data associated with physiological properties of the vessel and/or the biological fluid propagating therethrough (e.g., flow, fluid composition, etc.). This sensor data may be useful in monitoring the health of a patient, especially dialysis patients, and may be used to generate an alert signifying a detected health event that is being (or could be) experienced by the wearer of the wearable biosensing device. The wearable biosensing device may include other sensors (e.g., accelerometer, optical, bioimpedance, electrocardiography, etc.) to generate additional sensor data, where these sensors are configured to detect/monitor a physiological property and convert it into an electrical signal, which is subsequently converted to a data representation of the monitored property. The data representation enables remote monitoring. The first embodiment of the disclosure (wearable data gathering and remote analytics) is discussed in detail below, although the claimed invention is applicable to any or all embodiments.
I. TERMINOLOGY
[0022] In the following description, certain terminology is used to describe aspects of the invention. The terms “logic,” “component,” and “assembly” are representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic (or component or assembly) may include circuitry associated with data processing, data storage, and/or data communications. Examples of such circuitry may include, but are not limited or restricted to a processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, sensors, semiconductor memory, and/or combinatorial logic.
[0023] Alternatively, or in combination with the hardware circuitry described above, the logic (or component or assembly) may include software in the form of one or more software modules (hereinafter, “software module(s)”), which may be configured to support certain functionality upon execution by data processing circuitry. For instance, a software module may constitute an executable application, a daemon application, an application programming interface (API), a machine-learning (ML) model or other artificial intelligence-based software, a routine or subroutine, a function, a procedure, an applet, a servlet, source or object code, shared library/dynamic load library, or even one or more instructions. The “software module(s)” may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical, or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, a hard disk drive, an optical disc drive, a portable memory device, or cloud-based storage (e.g., AWS™ S3 storage, relational or non-relational database storage, etc.). As firmware, the logic (or component or assembly) may be stored in persistent storage.
[0024] The term “attach” and other tenses of the term (e.g., attached, attaching, etc.) may be construed as physically connecting a first component to a second component.
[0025] The term “interconnect” may be construed as a physical or logical communication path between two or more logic units or components. For instance, as a physical communication path, wired interconnects may be provided as electrical wiring, optical fiber, cable, and/or bus trace. As a logical communication path, the interconnect may be a wireless channel using short range signaling (e.g., Bluetooth™) or longer range signaling (e.g., infrared, radio frequency “RF” or the like), a communication pathway between two software-based interfaces, or the like.
[0026] Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps, or acts are in some way inherently mutually exclusive.
[0027] As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
II. DEVICE ATTACHMENT SCHEME
[0028] Referring to FIG. 1, a perspective view of an exemplary embodiment of a health monitoring system 10 is shown, where the health monitoring system 10 includes a wearable biosensing device 100, a local hub 130, and a sensor data processing system 140, which is communicatively coupled to a network 170. The local hub 130 operates as an intermediary component to enable the wearable biosensing device 100 to communicate with the sensor data processing system 140.
[0029] According to this embodiment of the disclosure, the wearable biosensing device 100 is attached to a patient's arm 110. As shown, the wearable biosensing device 100 is intended to be worn over an arteriovenous (AV) access 120 to monitor operability of the AV access 120 and the vessel 125 (e.g., vein or artery) associated therewith. The wearable biosensing device 100 is configured with components (e.g., microphone) and one or more software modules that collect audio associated with sound emanating from the AV access 120 to monitor fistula bruit, sometimes referred to as a vascular murmur. Bruit is a detected sound that operates as a reliable indicator of how well the AV access is functioning. The wearable biosensing device 100 is configured with other components (e.g., optical sensor(s) and/or accelerometer) and one or more software modules, which are configured to determine a heart rate (pulse) of the patient and/or thrill, namely the vibration felt upon palpation of the vessel 125 associated with the AV access 120, where either the absence of thrill or the presence of certain abnormal thrill may suggest stenosis of the underlying vessel 125, or the thrill may be transmitted from another source.

[0030] Hence, the monitoring for a healthy AV access includes analytics associated with (i) bruit (e.g., a rumbling sound that can be heard); (ii) a blood flow rate within a prescribed range; and (iii) a thrill (e.g., a rumbling sensation that can be felt). However, upon detection of abnormal or absent bruit, pulse, and/or thrill that may signify an unhealthy AV access, the sensor data processing system 140 may send a message to initiate an alert 150.
[0031] More specifically, the wearable biosensing device 100 is configured to direct collected information from sensing components installed within the wearable biosensing device 100 (sensor data) to the sensor data processing system 140, which is remotely located from the wearable biosensing device 100. For this embodiment, the wearable biosensing device 100 is configured to monitor properties (e.g., characteristics and operability) of the AV access 120 by collecting information associated with the AV access 120 and the biological fluid propagating therethrough (e.g., noise and flow measurements associated with flow, bruit, pulse, or thrill, etc.). The collected information may be used for remote monitoring, where the sensor data processing system 140 is configured to determine whether a health event (caused by inferior operability of the AV access 120) exists, which warrants generation and transmission of an alert 150 to the patient or an individual or system involved with the care of the patient.
[0032] In particular, the remote monitoring may involve transmission via interconnects of the collected information from the wearable biosensing device 100 to a local hub 130. The “local hub” 130 constitutes logic (e.g., a device, an application, etc.) that converts the collected sensor data of a first data representation 160 (hereinafter, “first data representation”), provided in accordance with a first transmission protocol (e.g., Bluetooth™ or other short-distance (wireless) transmission protocol), into sensor data associated with a second data representation 165 (hereinafter, “second data representation”). The local hub 130 routes the second data representation 165 via interconnects, in accordance with a second transmission protocol (e.g., cellular, WiFi™, or other long-distance (wireless) transmission protocol), to the sensor data processing system 140.
[0033] The sensor data processing system 140 features a processor 180 and a non-transitory storage medium 182, which maintains bruit detection logic 184, pulse detection logic 186, thrill detection logic 188, and/or an alert generation logic 190. The bruit detection logic 184 includes one or more software modules operating with the processor 180 to detect bruit from the collected sensor data (second data representation) 165, namely sounds generated by turbulent flow of blood in the AV access 120 in which an abnormal or absent sound may identify stenosis (e.g., narrowing of the vessel 125 and partial obstruction of the AV access 120). Similarly, the pulse detection logic 186 may include one or more software modules operating with the processor 180 to detect a pulse rate of the wearer of the biosensing device 100. The thrill detection logic 188 includes one or more software modules operating with the processor 180 on sensor data collected by one or more motion sensing components 370 (e.g., an accelerometer), which are among the sensing components configured to assist in detecting thrill. Thrill represents a palpable vibration caused by movement of blood through a blood vessel, displacing the vessel and the overlying tissue, and is conventionally assessed by physical palpation of the blood vessel 125 associated with the AV access 120 or of the AV access 120 itself.
[0034] The sensor data processing system 140 may include the alert generation logic 190, which generates and sends the alert (notification) 150 upon detecting an occurring (or potential) health event that requires attention by a doctor and/or another specified person (including the patient) responsible for addressing any occurring (or potential) health event. The alert 150 may be sent via network 170 for notification over a monitored website or may be sent from the sensor data processing system 140 as an electronic mail (e-mail) message, a text message, or any other signaling mechanism.
[0035] Although not shown, it is contemplated that the bruit detection logic 184, pulse detection logic 186, thrill detection logic 188, and/or alert generation logic 190, in whole or in part, may be deployed within the local hub 130 or within the wearable biosensing device 100. Each of the local hub 130 and wearable biosensing device 100 includes processing logic and non-transitory storage medium to retain this logic.
III. GENERAL DEVICE ARCHITECTURE
[0036] Referring to FIG. 2, an exploded view of an exemplary embodiment of the wearable biosensing device 100 is shown. For this embodiment, the wearable biosensing device 100 includes a first (top) housing 200, packaged biosensing logic 220, a shielding component 240, a second (bottom) housing 250, and an adhesive layer 270. The first housing 200 features multiple (e.g., two or more) lobes 205 formed as part of the first housing 200. The first housing 200 may be made of a flexible, water-impervious material (e.g., a polymer such as silicone, plastic, etc.) through a molding process, where the lobes 205 provide internal chambers for housing the biosensing logic 220.
[0037] As an illustrative example, according to one embodiment of the disclosure, the lobes 205 are positioned in a linear orientation, with a first plurality of lobes (e.g., first, second and third lobes 210-212) interconnected by a second plurality of lobes (e.g., fourth and fifth lobes 213-214). The fourth and fifth lobes 213-214 are configured to house interconnects 222 and 223, which provide electrical connections between an electronics assembly 225, a sensing assembly 230, and a power assembly 235 of the biosensing logic 220. As shown, each of the assemblies, namely the electronics assembly 225, the sensing assembly 230, and the power assembly 235, is maintained within a protective package 226, 231, and 236, respectively.
[0038] According to one embodiment of the disclosure, housed within the first lobe 210 as shown in FIGS. 2-3, the electronics assembly 225 includes a substrate 300 with processing logic 310, communications logic 320, and a non-transitory storage medium 330 mounted thereon. As shown, the non-transitory storage medium 330 maintains sensor data aggregation logic 340, which includes one or more software modules 342 operating with the processing logic 310 and the audio sensing components 375 to collect and aggregate information associated with bruit, namely sounds generated by turbulent flow of blood in the AV access 120 of FIG. 1 in which an abnormal sound may identify stenosis (e.g., narrowing of the vessel and partial obstruction of the AV access 120). Similarly, the sensor data aggregation logic 340 may further include one or more software modules 344 operating with the processing logic 310 and one or more optical sensors 365 to collect and aggregate data associated with a pulse rate of the wearer of the biosensing device 100. Additionally, the sensor data aggregation logic 340 may further include one or more software modules 346 operating with the processing logic 310 and an orientation sensing component 370 (e.g., an accelerometer) that collects and aggregates information associated with thrill, such as measured vibration levels based on measured palpations of the blood vessel 125 associated with the AV access 120 or measured palpations at the AV access 120 of FIG. 1.
[0039] Collectively, according to one embodiment of the disclosure, as shown in FIG. 3, logic 340 of the electronics assembly 225 is configured to (i) collect sensor data gathered by the sensing assembly 230, (ii) store the sensor data (raw) and/or conduct analytics on the collected sensor data, and (iii) communicate, via a wireless or a wired connection, the sensor data and/or the analytic results to a device (e.g., local hub 130 of FIG. 1) remotely located from the wearable biosensing device 100. For example, the electronics assembly 225 may be adapted to transmit the first data representation 160 of the collected sensor data to the local hub 130 as shown in FIG. 1.
[0040] As further shown in FIGS. 2-3, the sensing assembly 230 is housed within the second lobe 211 of the first housing 200. The sensing assembly 230 includes a substrate 350 and one or more sensors 360 (hereinafter, “sensor(s)”) mounted on a posterior surface 355 of the substrate 350. The sensor(s) 360 may include one or more optical sensors configured to emit light and/or detect reflected or refracted light. The optical sensors 360 may include a plurality of photo-plethysmograph (PPG) sensors 365, where each of the plurality of PPG sensors 365 includes multiple light sourcing elements and multiple light detecting elements.
[0041] Referring still to FIGS. 2-3, coupled to the electronics assembly 225 and the power assembly 235 via interconnects 222 and 223, the sensing assembly 230 may be mounted on or positioned proximate to the AV access 120 of FIG. 1. As a result, the optical sensors 360 may be used to obtain different measurements of properties of the AV access 120 of FIG. 1 and provide this data to the electronics assembly 225 for transmission and/or analysis. The optical sensors 360 may be arranged in a linear arrangement (as shown) or a circular arrangement with a light sourcing member being positioned centrally and light detecting members distributed radially from the central light sourcing member. Besides the optical sensors 360, the sensing assembly 230 may be configured to include the orientation sensing component 370 such as an accelerometer to measure vibration associated with the thrill and an audio sensing component 375 (e.g., microphone, etc.). The optical sensors 360 are positioned to emit or detect light via the shielding component 240 as described below.
[0042] The power assembly 235 includes a substrate 380, power management logic 385, and power supply logic 390. The power supply logic 390 is configured to provide power to both the components within the sensing assembly 230 as well as the electronics assembly 225. The power management logic 385 is configured to control the distribution of power (e.g., amount, intermittent release, or duration), including disabling of power when the wearable biosensing device 100 is not installed on or is detached from the wearer to avoid false data collection. The substrate 350 of the sensing assembly 230 may include hardwired traces (power layers) for routing of power from the power assembly 235 to components of the sensing assembly 230 and/or components of the electronics assembly 225.
[0043] Referring back to FIG. 2, the second housing 250 is configured with a centralized, raised opening 255 that is sized to surround a perimeter of the shielding component 240. Herein, according to one embodiment of the disclosure, a top surface 256 of the raised opening 255 is positioned adjacent to a bottom surface 232 of the protective package 231 for the sensing assembly 230. The raised opening 255 may further include lateral flanges 257, which are sized to reside within complementary lateral recesses 258 within the shielding component 240. As a result, the first housing 200 and the second housing 250 substantially encapsulate the protective packages 226 and 236 while providing partial encapsulation of the protective package 231 inclusive of the sensing assembly 230.
[0044] Additionally, the adhesive layer 270 is applied to at least a portion of a bottom surface 266 of the second housing 250. The adhesive layer 270 is adapted to attach to a surface of a patient’s skin and remain attached thereto. Alternatively, the adhesive layer 270 may include multiple layers for replacement of the second housing 250 without replacing the packaged biosensing logic 220.
[0045] In accordance with another embodiment of the disclosure, in lieu of the fastening elements 238 and 239 in combination with the raised fastening elements 260 and 262, a plurality of magnets (not shown) may be positioned within the second housing 250. These magnets may establish a magnetic coupling to metal fastening elements (e.g., metal connection points) positioned under the power assembly 235 and/or electronics assembly 225 and/or positioned at ends of the protective packages 226 and 236. Alternatively, the magnets may be positioned as part of the biosensing logic 220 and accessible to metal fastening elements positioned on the second housing 250.
[0046] Referring to FIGS. 4-8, according to one embodiment of the disclosure, the sensor data processing system 140 is configured to determine whether captured sensor data 400, such as captured audio that may include data recordings containing periodic acoustic signaling provided by the wearable biosensing device 100, is determinative of a particular flow classification. The captured audio 400 undergoes signal processing by the electronics assembly 225 of FIGS. 2-3 to produce a message inclusive of the captured sensor data 400, such as a spectrograph 410 associated with captured audio, which may be provided as part of the second data representation 165 of FIG. 1 to the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188 of the sensor data processing system 140. From the captured sensor data 400, the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188 are configured to discern flow classifications associated with the AV access 120. It is contemplated that the captured audio represented by the spectrograph 410 may be included, either in total or in part, with the message 160, namely the first data representation 160 of the collected sensor data.
[0047] As an illustrative example, the captured audio associated with the spectrograph 410 may constitute a first spectrograph 420 associated with a first flow classification (Class 1), a second spectrograph 430 associated with a second flow classification (Class 2), a third spectrograph 440 associated with a third flow classification (Class 3), or a fourth spectrograph 450 associated with a fourth flow classification (Class 4). The flow classifications may include, but are not limited or restricted to the following: (1) Class 1 - harsh upstroke with minimal diastolic flow; (2) Class 2 - low amplitude, low frequency; (3) Class 3 - normal flow; and (4) Class 4 - Non-classifiable.
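The four flow classifications above can be captured as a small lookup, sketched below. The label strings, the `needs_alert` rule (treating only Class 3 — normal flow — as healthy), and all names are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical mapping for the four flow classifications described above.
FLOW_CLASSES = {
    1: "harsh upstroke with minimal diastolic flow",
    2: "low amplitude, low frequency",
    3: "normal flow",
    4: "non-classifiable",
}

def needs_alert(flow_class: int) -> bool:
    """Classes 1, 2, and 4 suggest a dysfunctional or indeterminate AV
    access; only Class 3 (normal flow) is treated as healthy here."""
    return flow_class != 3
```

Under this sketch, an alert generation step would fire for every classification except normal flow, which is consistent with Class 4 possibly indicating complete stenosis as noted below.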
[0048] As shown in FIG. 5, class 1 denotes AV access operability concerns caused by vascular stenosis 500 downstream from the wearable biosensing device (SmartPatch) 100. The AV access operability concerns are based on sensor data collected by the wearable biosensing device 100. As measured, data extracted from captured high frequency and high amplitude audio 510, which is illustrated in spectrograph 520 (see class 1 spectrograph 420 of FIG. 4), is determined by the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188 of the sensor data processing system 140 (or in the local hub 130 or within the wearable biosensing device 100), in accordance with one or more of the ML models described below, to indicate downstream vascular stenosis.
[0049] Referring now to FIG. 6, class 2 denotes AV access operability concerns caused by vascular stenosis 600 upstream from the wearable biosensing device (SmartPatch) 100 of FIGS. 1-3. These AV access operability concerns are based on sensor data collected by the wearable biosensing device 100. As measured, data extracted from captured low frequency and low amplitude audio 610, which is illustrated in spectrograph 620 (see class 2 spectrograph 430 of FIG. 4), is determined by any one or more of the detection logic 184/186/188, in accordance with one or more of the ML models described below, to indicate upstream vascular stenosis.
[0050] As shown in FIG. 7, class 3 denotes healthy AV access operability based on sensor data measured by the wearable biosensing device (SmartPatch) 100 of FIGS. 1-3. As measured, the collected sensor data constitutes audio 700 with consistent amplitude as illustrated in spectrograph 710 (see class 3 spectrograph 440 of FIG. 4). In contrast, in FIG. 8, class 4 denotes a condition in which, based on artificial intelligence-based (AI-based) analytics conducted on the audio-based sensor data 800 as illustrated in spectrograph 810 (see class 4 spectrograph 450 of FIG. 4), AV access operability cannot be determined through trained, AI-based models. This may indicate a complete stenosis of the AV access.
[0051] In general terms, the flow classification conducted by the sensor data processing system 140 of FIG. 1 may be determined by the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188, each of which may correspond (or collectively may correspond) to one or more machine-learning (ML) models that conduct analytics on at least a feature set obtained from the audio captured by the audio sensing component 375 (e.g., features from audio collected by the microphone). The feature set for the ML model(s) may include Mel Frequency Cepstral Coefficients, sparse time domain features, the seven (7) Hu moments, four (4) wavelet scale energies, beat density, and/or autocorrelation features. These features are defined in detail below.
A. MEL FREQUENCY CEPSTRAL COEFFICIENTS
[0052] Mel Frequency Cepstral Coefficients (MFCCs) are a feature set that is commonly used in speech recognition, where the bruit detection logic 184 and/or the thrill detection logic 188 may perform the classification based on the MFCCs. The MFCCs include a set of twenty (20) coefficients that are calculated from the power spectrum of the audio signal. The MFCCs are calculated using the following operations: (1) conduct a Fourier transform of a monitored audio signal; (2) map the powers of the spectrum obtained above onto the Mel scale, using triangular overlapping windows; (3) take logarithm measurements of the powers at each of the Mel frequencies; and (4) conduct a discrete cosine transform of the list of Mel log powers.

[0053] Herein, the MFCCs correspond to the amplitudes of the resulting spectrum, and thus, constitute a well-established compressed representation of the perceptual qualities of a sound signal. As an illustrative example, MFCC 1, 4, 5, 6 and 7 may be included in the feature vector, albeit any combination of MFCC from 1-7 may be included in the feature vector.
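The four MFCC operations above can be sketched for a single audio frame as follows. This is a minimal, dependency-light illustration using numpy; the filter count, coefficient count, and frame handling are assumptions, not parameters taken from the disclosure:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sample_rate, n_filters=26, n_coeffs=20):
    """MFCCs for one frame: (1) FFT power spectrum, (2) triangular mel
    filterbank, (3) log of each mel-band power, (4) DCT-II."""
    # (1) Fourier transform and power spectrum of the frame
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    # (2) triangular overlapping windows spanning 0 Hz .. Nyquist
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0),
                             n_filters + 2)
    bins = np.floor((len(signal) + 1) * mel_to_hz(mel_points)
                    / sample_rate).astype(int)
    fbank = np.zeros((n_filters, len(spectrum)))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            fbank[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i, k] = (right - k) / max(right - center, 1)
    # (3) log of the power in each mel band (epsilon avoids log(0))
    log_energies = np.log(fbank @ spectrum + 1e-12)
    # (4) DCT-II of the mel log powers; keep the first n_coeffs amplitudes
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1)
                 / (2 * n_filters))
    return dct @ log_energies
```

In practice the signal would be framed per heartbeat or per window before this computation; the disclosure's selection of coefficients 1, 4, 5, 6 and 7 would then be applied to the returned vector.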
B. Hu MOMENTS
[0054] Additionally, or alternatively to performing flow classification based on MFCCs, the bruit detection logic 184 and/or the thrill detection logic 188 may also conduct analytics that include the Hu moments. The Hu moments are a set of seven (7) statistical moments of an image, which may be based on Mel spectrograms of the audio signal. The input vector takes these moments averaged across the Mel spectrograms of all framed heartbeats in the microphone signal. The Hu moments are calculated using the following operations: (1) frame the heartbeats in the microphone signal; (2) calculate the average Mel spectrogram across all heartbeats; and (3) calculate the Hu moments of the average Mel spectrogram.
[0055] It is contemplated that the MFCCs and the Hu moments are related, and thus, such operations are useful for bruit and/or thrill determinations. The MFCC and Hu moment flow classifications are useful because they provide analytics as to the shape of the sensor data, whether the sensor data is provided from audio components or from physical perturbation of the motion sensing components.
C. WAVELET SCALE ENERGIES
[0056] Wavelet scale energies are a set of energies that are calculated from the wavelet transform of the audio data captured by the audio sensing component 375 (e.g., microphone, etc.) implemented with the wearable biosensing device 100 of FIG. 3. The wavelet scale energies are calculated using the following operations: (1) define wavelet function and decomposition level; (2) apply wavelet decomposition to the audio data; and (3) extract scale energies from wavelet coefficients. Such operations may be performed by the bruit detection logic 184.
[0057] The wavelet scale energies characterize the AV access sounds at different frequencies, thereby providing additional information about the AV access (fistula) sounds that is not captured by the other features. The ML models described herein use the db4 wavelet and a decomposition level of 4.
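The three operations of section C can be sketched with a multilevel wavelet decomposition. To keep the sketch dependency-free, a Haar wavelet is used below instead of the db4 wavelet named in the disclosure; with an orthonormal wavelet, the detail energies at each level plus the final approximation energy account for all of the signal energy:

```python
import numpy as np

def haar_scale_energies(signal, levels=4):
    """Wavelet scale energies via a Haar multilevel decomposition.
    Returns (detail energies per level, final approximation energy).
    Illustrative only: the document's models use db4, not Haar."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(a) % 2:                       # pad to even length
            a = np.append(a, a[-1])
        # (2) one decomposition level: approximation + detail coefficients
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        # (3) scale energy extracted from the detail coefficients
        energies.append(float(np.sum(detail ** 2)))
        a = approx
    return energies, float(np.sum(a ** 2))
```

The four detail energies correspond to the four (4) wavelet scale energies listed in the feature set, each summarizing the signal content in one frequency band.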
D. BEAT DENSITY
Beat density is the fraction of the signal within consecutive heartbeats detected by the framer. Conducted by the bruit detection logic 184, the pulse detection logic 186, and/or the thrill detection logic 188, the beat density may be calculated using the following operations: (1) convert the audio signal into an envelope, using the wavelet lowpass method described below; (2) identify heartbeats based on their peak locations, filtering out heartbeats where the next peak is not near to one period from the current peak, and mark samples between valid peaks as “within a heartbeat”; and (3) find the fraction of samples that are within a heartbeat in a read or other window of data.
E. AUDIO AND AUDIO ENVELOPE AUTOCORRELATION
[0058] Autocorrelation in this context refers to the correlation of the audio signal with itself at different time lags. Autocorrelation is a fundamental technique used to identify a repeated cardiac cycle reflected in every sensor modality, inclusive of the audio sensing components 375 and/or motion sensing components 370 of FIG. 3. The maximum autocorrelation of the audio or its envelope, within the range of lags that may contain the heartbeat fundamental frequency, is used as a feature. Conducted by the pulse detection logic 186, the autocorrelation is calculated using the following operations: (1) calculate the autocorrelation of the audio signal with itself at different time lags; and (2) determine the maximum autocorrelation of the audio or its envelope within the range of lags that may contain the heartbeat fundamental frequency.
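The two autocorrelation operations can be sketched as below. The 40-180 bpm window used to bound the lag search is an assumption standing in for "the range of lags that may contain the heartbeat fundamental frequency":

```python
import numpy as np

def max_beat_autocorrelation(signal, sample_rate, bpm_range=(40, 180)):
    """Maximum normalized autocorrelation within the lag window that could
    contain one heartbeat period (40-180 bpm assumed here)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # (1) autocorrelation at all non-negative lags, normalized by lag 0
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    # (2) maximum within the physiologically plausible lag window
    lo = int(sample_rate * 60.0 / bpm_range[1])   # shortest beat period
    hi = min(int(sample_rate * 60.0 / bpm_range[0]), len(ac) - 1)
    return float(ac[lo:hi + 1].max())
```

A strongly periodic signal (a clean cardiac cycle) produces a value near 1, while a Class 4 non-classifiable read would produce a small value, so this feature complements beat density in separating periodic from non-periodic reads.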
F. MODELS FOR RECEIPT OF FEATURES VIA INPUT VECTORS
[0059] During algorithm development, classification models were assessed for use by the detection logic described above. Although a number of classification models may be utilized, for AV access classification, tree-based models generally performed better than the other models, so Random Forest, Gradient Boosting, and Extra Trees models were selected as the final component classifiers. These models are defined below.
1. Random Forest

[0060] Random forest classification is an ensemble learning method that combines the predictions from multiple weak learners (specifically, decision trees) to generate a more accurate final prediction. Additional features may be added to or removed from the input vector as a result of a rigorous, semi-automated feature selection process that is run every time these algorithms are trained or tuned. This process not only aims to identify the features that contain the most predictive information (which can then be analyzed and visualized more thoroughly using the simpler, non-ensemble methods described below), but also removes from analysis any quasi-constant features, features with low mutual information, and highly-correlated pseudo-redundant features. The primary benefits of this are twofold: first, reducing features lowers the risk of overfitting more directly than other model hyperparameters (which still must be optimized and analyzed to ensure that overfitting is not occurring). Second, a simpler model is easier to parse for meaning beyond “black box” predictions. A random forest model with only a few features can be at least approximated in a closed-form solution (or at least a simpler machine learning model), whereas a model with tens of features cannot readily be interpreted by humans.
[0061] There are several important RFC hyperparameters to tune for optimum performance without overfitting the model. The number of individual decision trees used to build the random forest needs to be large enough to capture the benefits of using the ensemble method, but too many trees can lead to exceptionally long runtimes. The minimum number of samples required to split a node (creating two new “leaves”) is critical for limiting overfitting; if this number is too small, the model may spuriously interpret randomness in the training data as predictive information. The same is true for the maximum number of branches, namely how many “decisions” are made to yield a classification decision. In addition to modifying the input vector for the RFC models based on feature selection, these hyperparameters will be tuned as part of any re-training or optimization effort triggered by additional data.
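Two stages of the feature selection process described above — dropping quasi-constant features and dropping highly-correlated pseudo-redundant features — can be sketched as follows. The thresholds and the greedy keep-first policy are assumptions; the mutual-information stage is omitted for brevity:

```python
import numpy as np

def select_features(X, var_tol=1e-8, corr_tol=0.95):
    """Drop quasi-constant and highly-correlated (pseudo-redundant)
    columns from feature matrix X (rows = samples, cols = features).
    Returns the indices of the columns kept."""
    X = np.asarray(X, dtype=float)
    # stage 1: discard quasi-constant features
    candidates = [j for j in range(X.shape[1]) if np.var(X[:, j]) > var_tol]
    # stage 2: greedily keep a feature only if it is not highly
    # correlated with a feature already kept
    selected = []
    for j in candidates:
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) >= corr_tol
            for k in selected)
        if not redundant:
            selected.append(j)
    return selected
```

Running this before training shrinks the input vector handed to the Random Forest (or Gradient Boosting, or Extra Trees) model, which directly lowers overfitting risk and leaves a model small enough to interpret.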
2. Gradient Boosting
[0062] Like the random forest classifier model, Gradient Boosting is an ensemble learning method, so the same principles of feature selection and parameter tuning apply. The model's input features are subject to the same rigorous, semi-automated feature selection process described above, and hyperparameter tuning is key to avoiding overfitting the model. The number of individual decision trees used to build the ensemble needs to be large enough to capture the benefits of using the ensemble method, but too many trees can lead to exceptionally long runtimes. The maximum number of branches (how many “decisions” are made to yield a classification decision) is critical for limiting overfitting; if this number is too large, the model may spuriously interpret randomness in the training data as predictive information. The same is true for the model's learning rate, namely how much relative weight is placed on each additional “boosting” tree.
3. Extra Trees
[0063] The Extra Trees classification model is similar to the Random Forest, so the same principles of feature selection and parameter tuning apply. The main differences are the lack of replacement when building trees and using a random split-rather than the optimal split-of features at each node. These two changes are designed to minimize overfitting compared to the Random Forest.
[0064] The ensemble prediction model produces a quaternary output corresponding to different classes of audio signal. Classes 1-3 contain a periodic signal, whereas Class 4 does not. The model makes a prediction on the read and returns a string value indicating whether there is a detectable periodic acoustic signal. This string value is stored in a relational or non-relational database in the same manner as other quantitative and classification results generated by the biosensing device and sensor data processing system described above.
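One simple way to combine the three component classifiers into the quaternary output described above is a majority vote, sketched below. The voting scheme and the label strings are illustrative assumptions; the disclosure does not specify how the component predictions are combined:

```python
import numpy as np

def ensemble_predict(component_predictions):
    """Combine component classifier outputs (e.g., from Random Forest,
    Gradient Boosting, and Extra Trees) into one quaternary class by
    majority vote, and map it to a string for database storage.
    Both the vote and the labels are hypothetical."""
    votes = np.bincount(component_predictions, minlength=5)
    winner = int(np.argmax(votes[1:]) + 1)   # classes are numbered 1-4
    labels = {
        1: "periodic signal: harsh upstroke",
        2: "periodic signal: low amplitude/frequency",
        3: "periodic signal: normal flow",
        4: "no detectable periodic acoustic signal",
    }
    return winner, labels[winner]
```

The returned string would then be written to the relational or non-relational database alongside the other quantitative results for the read.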
[0065] In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims

What is claimed is:
1. A wearable biosensing device, comprising: a biosensing logic including an electronics assembly and a sensing assembly that are configured to monitor operability of an arteriovenous (AV) access located proximate to the sensing assembly; and wherein the sensing assembly comprises (i) one or more audio sensing components, (ii) a plurality of optical sensing components, and (iii) one or more motion sensing components, and wherein the electronics assembly includes processing logic and communications logic configured to aggregate sensor data from different types of sensing components and route the sensor data to a remote system to conduct analytics on the sensor data using machine learning models to identify whether the AV access is operating at a level that indicates that the AV access is dysfunctional or becoming dysfunctional.
2. The wearable biosensing device of claim 1, comprising a first housing at least partially encapsulating the biosensing logic and including a plurality of lobes, the electronics assembly being positioned within a first lobe of the plurality of lobes and the sensing assembly being positioned within a second lobe of the plurality of lobes.
3. The wearable biosensing device of claim 1, wherein the sensing components include (i) the one or more audio sensing components corresponding to at least a microelectromechanical systems (MEMS) microphone, (ii) the plurality of optical sensing components corresponding to a plurality of optical sensors including at least a first light-emitting diode (LED) in a visible portion of a light spectrum and a second LED in an infrared portion of the light spectrum, and (iii) the one or more motion sensing components corresponding to at least a three-axis accelerometer.
4. The wearable biosensing device of claim 1 communicatively coupled to the remote system including a processor and a non-transitory storage medium, wherein the non-transitory storage medium includes bruit detection logic including one or more software modules that, when executed by the processor, detects a high-frequency audio signal associated with audio produced by fluid flow through the AV access and determines, based on the character of that audio signal, whether the AV access is dysfunctional.
5. The wearable biosensing device of claim 4 communicatively coupled to the remote system, wherein the non-transitory storage medium includes pulse detection logic to detect pulse rate and thereby flow rate for a patient associated with the wearable biosensing device and determine whether the AV access is dysfunctional.
6. The wearable biosensing device of claim 4 communicatively coupled to the remote system, wherein the non-transitory storage medium includes thrill detection logic including one or more software modules that, when executed by the processor, determines whether a level and character of vibration measured upon palpation of a blood vessel associated with the AV access or the AV access indicate that the AV access is dysfunctional.
7. The wearable biosensing device of claim 1 communicatively coupled to the remote system to perform analytics to classify operability of the AV access as normal or dysfunctional based on Mel Frequency Cepstral Coefficient computations.
8. The wearable biosensing device of claim 1 communicatively coupled to the remote system to perform analytics to classify operability of the AV access as normal or dysfunctional based on Hu moment computations.
9. The wearable biosensing device of claim 1 communicatively coupled to the remote system to perform analytics to classify operability of the AV access as normal or dysfunctional based on wavelet scale energy computations.
10. The wearable biosensing device of claim 1 communicatively coupled to the remote system to perform analytics to classify operability of the AV access as normal or dysfunctional using one or more machine learning models with input vector(s) comprising a combination of features associated with one or more of Mel Frequency Cepstral Coefficient computations, Hu moment computations, or wavelet scale energy computations, and other features derived from time- and frequency-domain signal processing techniques.
11. A health monitoring system, comprising: a wearable biosensing device including a housing and a biosensing logic partially encapsulated within the housing, the biosensing logic including an electronics assembly and a sensing assembly that are configured to monitor operability of an arteriovenous (AV) access located proximate to the sensing assembly, wherein the sensing assembly comprises different types of sensing components including (i) one or more audio sensing components, (ii) a plurality of optical sensing components, and (iii) one or more motion sensing components, and the electronics assembly includes processing logic and communications logic configured to aggregate sensor data from the different types of sensing components; and a remote system communicatively coupled to the wearable biosensing device to receive the sensor data, wherein the remote system is configured to conduct analytics on the sensor data using trained machine learning models to identify whether the AV access is operating at a level that indicates that the AV access is normal, dysfunctional, or becoming dysfunctional.
12. The health monitoring system of claim 11, wherein the wearable biosensing device comprises a first housing at least partially encapsulating the biosensing logic and including a plurality of lobes, the electronics assembly positioned within a first lobe of the plurality of lobes and the sensing assembly positioned within a second lobe of the plurality of lobes.
13. The health monitoring system of claim 11, wherein (i) the one or more audio sensing components of the sensing assembly of the wearable biosensing device correspond to at least a microelectromechanical systems (MEMS) microphone, (ii) the plurality of optical sensing components correspond to a plurality of optical sensors including at least a first light-emitting diode (LED) in a visible portion of a light spectrum and a second LED in an infrared portion of the light spectrum, and (iii) the one or more motion sensing components correspond to at least a three-axis accelerometer.
14. The health monitoring system of claim 11, wherein the remote system includes a processor and a non-transitory storage medium, wherein the non-transitory storage medium includes bruit detection logic including one or more software modules that, when executed by the processor, detects a high-frequency audio signal associated with audio produced by fluid flow through the AV access and determines, based on the character of that audio signal, whether the AV access is dysfunctional.
15. The health monitoring system of claim 14, wherein the non-transitory storage medium of the remote system includes pulse detection logic to detect pulse rate and thereby flow rate for a patient associated with the wearable biosensing device and determine whether the AV access is dysfunctional.
16. The health monitoring system of claim 14, wherein the non-transitory storage medium of the remote system includes thrill detection logic including one or more software modules that, when executed by the processor, determines whether a level and character of vibration measured upon palpation of a blood vessel associated with the AV access or the AV access indicate that the AV access is dysfunctional.
17. The health monitoring system of claim 12, wherein the remote system is configured to perform analytics to classify operability of the AV access as normal or dysfunctional based on Mel Frequency Cepstral Coefficient computations.
18. The health monitoring system of claim 12, wherein the remote system is configured to perform analytics to classify operability of the AV access as normal or dysfunctional based on Hu moment computations.
19. The health monitoring system of claim 12, wherein the remote system is configured to perform analytics to classify operability of the AV access as normal or dysfunctional based on wavelet scale energy computations.
20. The health monitoring system of claim 14, wherein the remote system is configured to perform analytics to classify operability of the AV access as normal or dysfunctional using one or more machine learning models with input vector(s) comprising a combination of features associated with one or more of Mel Frequency Cepstral Coefficient computations, Hu moment computations, or wavelet scale energy computations, and other features derived from time- and frequency-domain signal processing techniques.
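Claims 9, 10, 19, and 20 recite wavelet scale energy computations as candidate input features for the machine learning classifier, but the publication does not disclose the wavelet family, the number of decomposition levels, or any normalization. As a rough illustration only, the following sketch computes per-scale energies from a Haar decomposition; the function name, the choice of the Haar wavelet, and the five-level depth are assumptions for demonstration, not the patented method.

```python
import numpy as np

def haar_scale_energies(signal, levels=5):
    """Return the fraction of signal energy captured at each Haar
    detail scale, plus the final approximation.  Hypothetical feature
    extractor; wavelet family and depth are illustrative choices."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(x) < 2:
            break
        if len(x) % 2:                              # pad to even length
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass half-band
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    energies.append(float(np.sum(x ** 2)))          # residual approximation
    total = sum(energies)
    return [e / total for e in energies]            # normalize to sum to 1
```

Such a feature could plausibly serve the classification the claims describe: turbulent flow audio (a bruit) concentrates energy at fine scales, while a damped or low-flow signal shifts it toward coarse scales, so the normalized vector is a natural input alongside MFCC and Hu moment features.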
PCT/US2024/050110 2023-11-09 2024-10-04 System and method for automated, noninvasive identification of arteriovenous access dysfunction Pending WO2025101292A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363597667P 2023-11-09 2023-11-09
US63/597,667 2023-11-09

Publications (1)

Publication Number Publication Date
WO2025101292A1 true WO2025101292A1 (en) 2025-05-15

Family

ID=95696603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/050110 Pending WO2025101292A1 (en) 2023-11-09 2024-10-04 System and method for automated, noninvasive identification of arteriovenous access dysfunction

Country Status (1)

Country Link
WO (1) WO2025101292A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100220900A1 (en) * 2009-03-02 2010-09-02 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Fingerprint sensing device
US20150366516A1 (en) * 2011-09-23 2015-12-24 Nellcor Puritan Bennett Ireland Systems and methods for determining respiration information from a photoplethysmograph
US20160058288A1 (en) * 2014-08-28 2016-03-03 Mela Sciences, Inc. Three dimensional tissue imaging system and method
US20200057937A1 (en) * 2018-08-17 2020-02-20 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method thereof
US20210015991A1 (en) * 2018-10-19 2021-01-21 PatenSee Ltd. Systems and methods for monitoring the functionality of a blood vessel
US20220304586A1 (en) * 2016-09-12 2022-09-29 Alio, Inc. Wearable device with multimodal diagnostics
US20230060676A1 (en) * 2021-02-19 2023-03-02 SafeTogether Limited Liability Company Multimodal diagnosis system, method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JOCELYN HUDSON: "Wearable device for AV fistula remote monitoring shows promise—but faces commercial challenges", VASCULAR NEWS, 6 May 2021 (2021-05-06), XP093316814, Retrieved from the Internet <URL:https://vascularnews.com/wearable-device-for-av-fistula-remote-monitoring-shows-promise-but-faces-commercial-challenges/> *

Similar Documents

Publication Publication Date Title
Thiyagaraja et al. A novel heart-mobile interface for detection and classification of heart sounds
Leng et al. The electronic stethoscope
KR102363578B1 (en) Infrasonic stethoscope for monitoring physiological processes
RU2449730C2 (en) Multiparameter classification of cardiovascular tones
US7520860B2 (en) Detection of coronary artery disease using an electronic stethoscope
Chakrabarti et al. Phonocardiogram signal analysis-practices, trends and challenges: A critical review
US8690789B2 (en) Categorizing automatically generated physiological data based on industry guidelines
Chowdhury et al. Machine learning in wearable biomedical systems
CN103313662A (en) System, stethoscope and method for indicating risk of coronary artery disease
EP2440139A1 (en) Method and apparatus for recognizing moving anatomical structures using ultrasound
US20140128754A1 (en) Multimodal physiological sensing for wearable devices or mobile devices
US20190110774A1 (en) Wearable health-monitoring devices and methods of making and using the same
Paviglianiti et al. Noninvasive arterial blood pressure estimation using ABPNet and VITAL-ECG
Shokouhmand et al. Diagnosis of peripheral artery disease using backflow abnormalities in proximal recordings of accelerometer contact microphone (ACM)
Gonzalez-Landaeta et al. Estimation of systolic blood pressure by Random Forest using heart sounds and a ballistocardiogram
CN112535467A (en) Physical sign parameter monitoring equipment and method
WO2025101292A1 (en) System and method for automated, noninvasive identification of arteriovenous access dysfunction
Monika et al. Embedded Stethoscope for Real Time Diagnosis of Cardiovascular Diseases
US20230404518A1 (en) Earbud Based Auscultation System and Method Therefor
US20240212855A1 (en) Systems, methods and wearable biosensing devices for use in a diagnostic architecture
Sung et al. Computer-assisted auscultation: patent ductus arteriosus detection based on auditory time–frequency analysis
Kumar et al. Coronary artery disease detection from pcg signals using time domain based automutual information and spectral features
Choi et al. Development of wireless heart sound acquisition system for screening heart valvular disorder
US20210401311A1 (en) System and Method for Leak Correction and Normalization of In-Ear Pressure Measurement for Hemodynamic Monitoring
Koegelenberg Application of laser Doppler vibrocardiography for human heart auscultation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24889353

Country of ref document: EP

Kind code of ref document: A1