
WO2025067903A1 - Ultrasound imaging with follow up sweep guidance after blind sweep protocol - Google Patents


Info

Publication number
WO2025067903A1
Authority
WO
WIPO (PCT)
Prior art keywords
sweep
processor
ultrasound
blind
anatomical
Prior art date
Legal status
Pending
Application number
PCT/EP2024/075707
Other languages
French (fr)
Inventor
Sean Flannery
Stephen Schmidt
Leili SALEHI
Leila KALANTARI
Tongxi WANG
Shyam Bharat
Jonathan Thomas SUTTON
Current Assignee
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips NV
Publication of WO2025067903A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/42 Details of probe positioning or probe attachment to the patient
    • A61B 8/4245 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/4254 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors mounted on the probe
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Clinical applications
    • A61B 8/0833 Clinical applications involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/58 Testing, adjusting or calibrating the diagnostic device
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/42 Details of probe positioning or probe attachment to the patient
    • A61B 8/4245 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/4263 Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors not mounted on the probe, e.g. mounted on an external reference frame

Definitions

  • the subject matter described herein relates to devices, systems, and methods for using ultrasound probe position to guide a user to perform an additional sweep after completing a blind sweep protocol for ultrasound imaging.
  • Ultrasound imaging is often used for diagnostic purposes in an office or hospital setting, but may also be used in resource-constrained care settings (e.g., homes, accident sites, ambulances, mobile health facilities, etc.) by emergency personnel, home health nurses, midwives, etc., who may lack ultrasound expertise.
  • Blind sweep protocols are becoming increasingly common as a method to simplify the ultrasound image acquisition workflow for minimally trained users. Whereas in a typical guided sweep by a trained sonographer, the user must localize anatomical regions of interest (ROIs) by relying on their interpretation of the image output, the blind sweep protocol simply calls for sweeps along a pre-determined grid, without necessitating image understanding on the part of the user.
  • a blind sweep protocol is less likely to appropriately capture anatomical regions of interest (ROI; e.g., head standard plane or other pre-defined anatomical plane) when compared to guided sweeps by a trained sonographer.
  • anatomical ROIs may not be fully imaged. This has implications for many important screening measurements.
  • The Hadlock equation estimates depend on accurate biometry measurements of the fetal head standard plane to determine head circumference and biparietal diameter, and of the abdominal standard plane to determine abdominal circumference. As a result, it may be more difficult to obtain accurate biometry measurements from blind sweeps.
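For concreteness, below is a minimal sketch of one commonly cited Hadlock estimated-fetal-weight variant using biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur length (FL), all in centimeters. The coefficients are quoted from the published Hadlock formulas, not from this disclosure, and should be verified against the original literature before any use.

```python
import math

def hadlock_efw_grams(bpd_cm: float, hc_cm: float, ac_cm: float, fl_cm: float) -> float:
    """Estimated fetal weight (grams) from one commonly cited Hadlock variant.

    Coefficients are taken from the literature (not from this disclosure) and
    should be verified before any clinical use.
    """
    log10_efw = (1.3596
                 - 0.00386 * ac_cm * fl_cm
                 + 0.0064 * hc_cm
                 + 0.00061 * bpd_cm * ac_cm
                 + 0.0424 * ac_cm
                 + 0.174 * fl_cm)
    return 10.0 ** log10_efw

# Example with hypothetical mid-third-trimester measurements.
print(round(hadlock_efw_grams(bpd_cm=8.5, hc_cm=31.0, ac_cm=30.0, fl_cm=6.5)))
```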
  • the automated parameter estimations from the blind sweep may be inaccurate or indeterminable.
  • If the placenta is found to be low-lying from the original blind sweep, an additional sweep including the cervix might be indicated to rule in or rule out placenta previa (PP).
  • the ultrasound sweep evaluation system uses an inertial measurement unit (IMU) or other motion tracking methods to map the location of each ultrasound image frame, such that the frames can be registered into an anatomical mapping.
  • the ultrasound sweep evaluation system supplements blind sweeps with probe tracking, anatomy detection, and registration of the detected anatomy to the detected probe position.
  • the system generates anatomical mapping of the detected anatomy based on the registration, and directs the user to perform follow-up sweeps based on regions of the anatomical mapping that are deemed to be incomplete.
  • Such improved imaging not only increases the confidence of measurements or diagnoses made on the basis of the blind sweep protocol, but also has the potential to improve any downstream image processing used to generate additional metrics (e.g., fetal ultrasound evaluation metrics) derived from the image processing.
  • a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
  • One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
  • One general aspect includes a system, which includes a processor configured for communication with an ultrasound probe, where the processor is configured to: receive a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, where the blind sweep protocol may include a plurality of sweeps of the ultrasound probe on the patient body; receive position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol; detect an anatomical feature of the patient body in the plurality of image frames; perform registration between: the position data; and at least one of the detected anatomical feature or the plurality of ultrasound image frames; generate an anatomical mapping based on the registration; determine if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames; and if the blind sweep protocol is incomplete, output user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, where the additional sweep is different than the plurality of sweeps of the blind sweep protocol.
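The claimed processing chain can be summarized as a short pipeline. The sketch below is a hypothetical outline only: the callables (detect, register, build_map, find_gaps) are placeholders for the detection, registration, mapping, and gap-analysis components described in this disclosure, not part of it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SweepResult:
    complete: bool
    guidance: Optional[str]  # e.g., "repeat sweep 2, shifted toward the patient's left"

def evaluate_blind_sweep(frames, positions, detect, register, build_map, find_gaps):
    """Hypothetical outline of the claimed steps; all callables are placeholders."""
    detections = [detect(f) for f in frames]                  # detect anatomical features per frame
    registration = register(positions, detections, frames)    # register positions with features/frames
    anatomical_map = build_map(registration)                  # generate the anatomical mapping
    guidance = find_gaps(anatomical_map, registration, detections, frames)
    if guidance is None:                                      # blind sweep protocol judged complete
        return SweepResult(complete=True, guidance=None)
    # Protocol incomplete: guide an additional sweep different from the original grid sweeps.
    return SweepResult(complete=False, guidance=guidance)
```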
  • Implementations may include one or more of the following features.
  • the processor is configured to determine at least one of a location, a direction, a length, or a duration for the additional sweep, based on the anatomical mapping.
  • the user guidance may include a visual representation of at least one of the location, the direction, the length, or the duration for the additional sweep, and the processor is configured to output the user guidance to a display in communication with the processor.
  • the system may include the display.
  • the user guidance is overlaid on the anatomical mapping.
  • the processor is configured to output the anatomical mapping to a display in communication with the processor.
  • the user guidance may include auditory feedback, where the processor is configured to output the user guidance to a speaker in communication with the processor.
  • the system may include the speaker.
  • the user guidance may include haptic feedback, where the processor is configured to output the haptic feedback to a haptic motor in communication with the processor.
  • the haptic motor is disposed within the ultrasound probe.
  • the system may include the ultrasound probe.
  • the ultrasound probe may include at least one of an accelerometer, a gyroscope, or a magnetometer disposed within the ultrasound probe.
  • the processor is configured to receive the position data from at least one of the accelerometer, the gyroscope, or the magnetometer.
  • the processor is configured to receive the position data from at least one of the camera or the magnetic coil. In some aspects, the processor is configured to detect the anatomical feature using a trained machine learning model. In some aspects, to determine if the blind sweep protocol is complete, the processor is configured to perform at least one of: a determination of whether a plurality of anatomical features is detected; a determination of whether a first bounding box associated with detection of the anatomical feature occurs at an edge of a respective image frame; a determination of whether a second bounding box associated with the detection of the anatomical feature occurs in an ultrasound image frame obtained at an end of a sweep in the blind sweep protocol; a determination of whether a third bounding box of the detection of the anatomical feature may include a confidence that does not satisfy a first threshold; or a determination of whether a metric derived from image processing of the plurality of ultrasound image frames may include a confidence that does not satisfy a second threshold.
  • the anatomical feature may include an anatomical feature of a fetus inside the patient body.
  • the processor is configured to determine whether a pre-defined anatomical plane for ultrasound evaluation of the fetus is detected. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • One general aspect includes a method which includes receiving, with a processor in communication with an ultrasound probe, a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, where the blind sweep protocol may include a plurality of sweeps of the ultrasound probe on the patient body.
  • the method also includes receiving, with the processor, position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol.
  • the method also includes detecting, with the processor, an anatomical feature of the patient body in the plurality of image frames.
  • the method also includes performing, with the processor, registration between: the position data, and at least one of the detected anatomical feature or the plurality of ultrasound image frames.
  • the method also includes generating, with the processor, an anatomical mapping based on the registration.
  • the method also includes determining, with the processor, if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames.
  • the method also includes, if the blind sweep protocol is incomplete, outputting, with the processor, user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, where the additional sweep is different than the plurality of sweeps of the blind sweep protocol.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • Implementations may include one or more of the following features.
  • the user guidance may include a visual representation of at least one of a location, a direction, a length, or a duration for the additional sweep, where the visual representation is overlaid on the anatomical mapping.
  • Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
  • Figure 1 is a schematic, diagrammatic representation of an ultrasound imaging system, according to aspects of the present disclosure.
  • Figure 2 is a schematic diagram of a processor circuit, according to aspects of the present disclosure.
  • Figure 3 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
  • Figure 4 is a schematic, diagrammatic representation, in block diagram form, of an example ultrasound sweep evaluation system, according to aspects of the present disclosure.
  • Figure 5 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method, according to aspects of the present disclosure.
  • Figure 6 is a sweep progress screen display of an example ultrasound sweep evaluation system, according to aspects of the present disclosure.
  • Figure 7 is a schematic, diagrammatic, perspective view of an imaging probe (e.g., an ultrasound imaging probe) in contact with a body surface of a patient, according to aspects of the present disclosure.
  • Figure 8 is a schematic, diagrammatic representation, in hybrid flow diagram / block diagram form, of an example ultrasound sweep guidance method, according to aspects of the present disclosure.
  • Figure 9 is a schematic, diagrammatic illustration, in block diagram form, of an anatomy detection subsystem, according to aspects of the present disclosure.
  • Figure 10 is a schematic, diagrammatic representation of an ultrasound video 1010 made up of multiple image frames, according to aspects of the present disclosure.
  • Figure 11 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
  • Figure 12 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
  • Figure 13 is a screen display of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure.
  • Figure 14 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method, according to aspects of the present disclosure.
  • an ultrasound sweep evaluation system measures the quality of ROI capture following a blind imaging sweep, and provides clear instructions to the user about any sweeps that may need to be repeated, either in the same location or in slightly different locations.
  • the ultrasound sweep evaluation system presents a novel approach to imaging quality control by, in some aspects, using an inertial measurement unit (IMU).
  • Other aspects, instead or in addition, leverage alternative motion tracking methods.
  • An IMU measures probe motion, and these data are leveraged to determine post-sweep acceptance/rejection and repetition recommendations.
  • the quality control methodology presented herein is intended for use with standalone, mobile, and cart-based ultrasound imaging systems, but can also be applied to other ultrasound systems and medical imaging and/or device guidance generally.
  • the system instructs the user to perform imaging sweeps along a pre-determined grid on the mother’s abdomen (blind sweeps).
  • This protocol is in contrast to the free hand sweeps that would be performed by a trained sonographer, during which the sonographer moves the probe as needed to localize relevant anatomical structures of interest.
  • the motion sensor that is already included in some ultrasound probes may be leveraged to provide post-sweep quality control.
  • a blind sweep protocol is less likely to appropriately capture anatomical regions of interest (ROIs; e.g., head standard plane) when compared to guided sweeps by a trained sonographer.
  • AI-based analyses that rely on these data for many applications (such as anatomy mapping, e.g., to estimate fetal presentation, placenta location estimation, and calculation of the maximum vertical pocket of amniotic fluid) may suffer as a result.
  • this invention describes a method to utilize information automatically gleaned from the blind sweep protocol to guide the user to acquire additional sweeps to capture information not present in the original blind sweeps. This is done by leveraging two technologies: inertial measurement unit (IMU) transducer motion tracking, and deep learning-based anatomical mapping. Using these two technologies, the system can identify frames containing relevant anatomical ROIs via anatomy mapping, and determine their locations within the blind sweep grid using IMU tracking data.
  • An ROI can be classified as incompletely imaged (e.g., the detected anatomical feature within the ROI is incompletely imaged) if the bounding box associated with a particular anatomy detection is at the edge of the frame, is present in the last frame of the sweep, has a low classification confidence, does not contain specific features such as a standard plane or a measurable femur length, or based on other criteria.
  • the system may then direct the user to perform additional blind sweeps in the identified area.
  • Incomplete imagery can also be characterized by the complete absence of a particular anatomy - in such cases, other detected anatomies can be used to prescribe a starting point for a new sweep in the attempt to find the anatomy in question (e.g., if the femur is not captured in the original blind sweep, detections of the head/spine/abdomen can suggest a narrowed-down search location for the femur).
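A minimal sketch of how the incomplete-ROI criteria and the missing-anatomy fallback described above might be expressed in code. The edge margin, confidence threshold, and label handling are illustrative assumptions, not values from the disclosure.

```python
# Illustrative thresholds (assumptions, not disclosed values).
EDGE_MARGIN_PX = 5
MIN_CONFIDENCE = 0.6

def roi_incomplete(box, frame_shape, is_last_frame, confidence):
    """Flag an ROI as incompletely imaged.

    box = (x0, y0, x1, y1) in pixels; frame_shape = (height, width).
    Criteria mirror the disclosure: bounding box at the frame edge, present in
    the last frame of a sweep, or detected with low classification confidence.
    """
    h, w = frame_shape
    x0, y0, x1, y1 = box
    at_edge = (x0 <= EDGE_MARGIN_PX or y0 <= EDGE_MARGIN_PX
               or x1 >= w - EDGE_MARGIN_PX or y1 >= h - EDGE_MARGIN_PX)
    return at_edge or is_last_frame or confidence < MIN_CONFIDENCE

def suggest_search_start(missing_label, detected_positions):
    """If an anatomy is never detected, start the follow-up sweep near the
    centroid of the other detected anatomies (e.g., head/spine/abdomen when
    the femur is missing). detected_positions maps label -> list of (x, y)."""
    related = [p for label, pts in detected_positions.items()
               if label != missing_label for p in pts]
    if not related:
        return None  # no anchor anatomy; fall back to repeating the full protocol
    xs, ys = zip(*related)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```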
  • the user may be guided to the start point of the new targeted blind sweep grid by IMU-based feedback, graphical/auditory/haptic demonstration in the user interface, or a combination thereof.
  • the ultrasound sweep evaluation system described herein mitigates these potential issues in the original blind sweeps, by automatically directing the user to perform follow up additional localized blind sweeps around anatomical ROIs, improving image data quality and thus increasing the value proposition of the ultrasound imaging platform, especially for novice users.
  • the ultrasound sweep evaluation system includes methods and devices to identify gaps in the image information gleaned from the blind sweep protocol. Gaps could refer to unscanned regions, partially covered anatomies, or completely missing anatomies.
  • the system also includes methods and devices to estimate what user actions are needed to overcome these gaps (e.g., decide the boundaries/extent of an additional condensed sweep to capture missing information). Knowing what the gaps are, and where the missing information is expected to be present, can inform the generation of a “custom” re-scan protocol.
  • An example re-scan protocol can be a condensed blind sweep in a specific area of the maternal abdomen (e.g., lower left quadrant, if the head is detected there in the original sweep but the number of detections is judged to be insufficient) to augment the information gleaned from the original set of blind sweeps.
  • the ultrasound sweep evaluation system also includes a user interface (UI) to communicate instructions, and possibly success metrics, to the user via graphics or text on the screen, via auditory or haptic feedback, or by other feedback mechanism(s).
  • Anatomy detection includes a method to automatically detect anatomical ROIs within ultrasound images, such as head, spine, femur, etc. (e.g., object detection approaches based on the traditional image processing/computer vision methods or a deep learning detection/classification model). This feature may already be present in some ultrasound imaging systems.
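As a stand-in for the unspecified detection model, the sketch below uses a generic torchvision object detector to produce labeled bounding boxes per frame. The class count, label meanings, and score threshold are assumptions for illustration; in practice a model trained for fetal anatomy would be loaded instead of the randomly initialized one shown here.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stand-in detector: 5 classes assumed (background + head/spine/abdomen/femur).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=5)
model.eval()

@torch.no_grad()
def detect_anatomy(frame_tensor, score_threshold=0.5):
    """frame_tensor: float image tensor of shape (3, H, W), values in [0, 1].

    Returns a list of (box, label, score) tuples above the threshold.
    """
    output = model([frame_tensor])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] >= score_threshold
    return list(zip(output["boxes"][keep].tolist(),
                    output["labels"][keep].tolist(),
                    output["scores"][keep].tolist()))
```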
  • Probe position tracking includes integrating acceleration and angular velocity measurements from an inertial measurement unit (IMU, whether internal or externally mounted), and/or other motion tracking or position tracking methods.
  • the system can determine the location of each ultrasound image frame, and therefore of the anatomical ROIs detected within the image frames.
  • the position of missing information can then be inferred, so that the user can be guided to perform follow-up sweeps.
  • the probe motion tracking method also allows the user to be guided to the follow-up sweep location.
  • User guidance may be provided through a console, mobile app, etc.
  • the feedback to the user may take the form of audio, visual, and/or haptic feedback to guide the transducer to the start point of the follow-up sweep.
  • Another option (not mutually exclusive) is to provide a graphical illustration of the new scan grid in the user interface.
  • an IMU that includes a magnetometer may be used, to provide even more accurate tracking by providing an invariant orientation reference in the Earth’s magnetic field, or other magnetic fields that may be present.
  • the transducer position information determined by the IMU can be used to map each image frame to a location on the blind sweep grid. Movement and position information may optionally be provided by other mechanisms (e.g., electromagnetic tracking, video-based tracking, etc.).
  • the IMU must be calibrated prior to use.
  • the transducer containing the IMU may for example be placed in a fixed, known orientation, with data being logged for a set time period to obtain the biases along each axis.
  • the calibration time may for example be determined based on the number of samples needed to reliably determine sensor bias, and may thus depend on the sampling rate of the IMU.
  • the biases determined during calibration will thus be known. It is noted that sensor biases of the IMU may be accounted for when processing its measurements.
  • the data streams may also be denoised using various hardware and/or software methods known in the art (e.g., low-pass, high-pass or band-pass filters or filtering methods).
  • the sensor streams are then fused, as described below, and used to compute an X, Y, Z position for the probe in a global coordinate system.
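A simplified sketch of the bias calibration and position computation described above, assuming a stationary calibration window and naive double integration in an already gravity-compensated, probe-aligned global frame. A practical system would rely on the sensor-fusion filters discussed later to limit drift; all parameters here are illustrative.

```python
import numpy as np

def calibrate_bias(stationary_samples):
    """Average readings logged while the probe is held still in a known
    orientation; the per-axis mean is taken as the sensor bias."""
    return np.mean(np.asarray(stationary_samples), axis=0)

def integrate_position(accel_samples, dt, bias):
    """Naive dead reckoning: subtract bias, then double-integrate acceleration
    to velocity and position. Gravity compensation and orientation tracking
    are assumed to have been handled upstream; a real system would fuse
    gyroscope/magnetometer data (e.g., via Kalman filtering) to limit drift."""
    accel = np.asarray(accel_samples) - bias          # (N, 3), bias-corrected
    velocity = np.cumsum(accel * dt, axis=0)          # (N, 3)
    position = np.cumsum(velocity * dt, axis=0)       # (N, 3) in the global frame
    return position
```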
  • each image frame and its related anatomy labels can be mapped to its corresponding location within the blind sweep grid.
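A small sketch of registering frames (and, by extension, their anatomy labels) to cells of the blind sweep grid from IMU-derived positions; the grid origin, cell size, and grid shape are assumed parameters rather than disclosed values.

```python
import numpy as np

def map_frames_to_grid(frame_positions_xy, grid_origin_xy, cell_size_m, grid_shape):
    """Assign each frame's (x, y) probe position to a blind-sweep grid cell.

    frame_positions_xy : (N, 2) array of positions in the global frame
    grid_origin_xy     : (x, y) of the grid's lower-left corner (assumed known)
    cell_size_m        : edge length of one grid cell in meters (assumed)
    grid_shape         : (rows, cols) of the blind sweep grid (assumed)
    """
    rel = (np.asarray(frame_positions_xy) - np.asarray(grid_origin_xy)) / cell_size_m
    cells = np.floor(rel).astype(int)
    rows, cols = grid_shape
    cells[:, 0] = np.clip(cells[:, 0], 0, cols - 1)   # column index from x
    cells[:, 1] = np.clip(cells[:, 1], 0, rows - 1)   # row index from y
    # Each frame's anatomy labels inherit the frame's grid cell.
    return [tuple(c) for c in cells]
```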
  • completeness can be determined in multiple ways. For example, if the bounding box for a feature is at the edge of a frame, or still present in the last frame of a sweep, it can be deduced that it was incompletely imaged. Given that the location of the image frame and the incomplete anatomy within the frame are both known, it can be predicted in which direction the sweep must be shifted to complete the anatomy.
  • Conditional sweep guidance: if incomplete anatomy is identified, the user is then prompted by the user interface to perform follow-up sweeps.
  • the new blind sweep specifications may be relayed to the user in multiple ways as well.
  • the IMU may be employed again here to guide the user with real-time feedback to the desired start point.
  • This feedback can be provided in auditory, visual, or haptic form, or some combination of these.
  • a map of the new scan grid can be shown graphically in the user interface. Feedback may be provided at the conclusion of each sweep, or at the conclusion of the blind sweep protocol.
  • the present disclosure aids substantially in the capture of high-quality ultrasound images by minimally trained users, by detecting the position of the ultrasound probe when each image frame is captured, providing post-sweep feedback to the user, and requiring that the user repeat any sweeps where anatomy of interest is not fully captured.
  • the ultrasound sweep evaluation system disclosed herein provides practical improvements in the quality of the captured images and the time required to capture them. This improved imaging transforms a process that is heavily reliant on professional experience into one that is accurate and repeatable even for minimally trained personnel, without the normally routine need to train clinicians such as emergency department personnel to perform quality control on the ultrasound images they capture.
  • This unconventional approach improves the functioning of the ultrasound imaging system, by providing reliable, repeatable imaging in hospital, office, vehicle, field, and home settings.
  • the ultrasound sweep evaluation system may be implemented as a process at least partially viewable on a display, and operated by a control process executing on a processor that accepts user inputs from a keyboard, mouse, or touchscreen interface, and that is in communication with one or more sensor probes.
  • the control process performs certain specific operations in response to different inputs or selections made at different times.
  • FIG. 1 is a schematic, diagrammatic representation of an ultrasound imaging system 100, according to aspects of the present disclosure.
  • the ultrasound imaging system 100 may for example be used to acquire ultrasound video sweeps, which can then be analyzed by a human clinician or an artificial intelligence to diagnose medical conditions.
  • the ultrasound imaging system 100 is used for scanning an area or volume of a subject’s body.
  • a subject may include a patient of an ultrasound imaging procedure, or any other person, or any suitable living or non-living organism or structure.
  • the ultrasound imaging system 100 includes an ultrasound imaging probe 110 in communication with a host 130 over a communication interface or link 120.
  • the probe 110 may include a transducer array 112, a beamformer 114, a processor circuit 116, and a communication interface 118.
  • the host 130 may include a display 132, a processor circuit 134, a communication interface 136, and a memory 138 storing subject information.
  • the probe 110 is an external ultrasound imaging device including a housing 111 configured for handheld operation by a user.
  • the transducer array 112 can be configured to obtain ultrasound data while the user grasps the housing 111 of the probe 110 such that the transducer array 112 is positioned adjacent to or in contact with a subject’s skin.
  • the probe 110 is configured to obtain ultrasound data of anatomy within the subject’s body while the probe 110 is positioned outside of the subject’s body for general imaging, such as for abdomen imaging, liver imaging, etc.
  • the probe 110 can be an external ultrasound probe, a transthoracic probe, and/or a curved array probe.
  • the probe 110 can be an internal ultrasound imaging device and may comprise a housing 111 configured to be positioned within a lumen of a subject’s body for general imaging, such as for abdomen imaging, liver imaging, etc.
  • the probe 110 may be a curved array probe.
  • Probe 110 may be of any suitable form for any suitable ultrasound imaging application including both external and internal ultrasound imaging.
  • the ultrasound probe 110 may include an inertial measurement unit (IMU) 115, which may in some cases include an accelerometer 140 (e.g., a 1-axis, 2-axis, or 3-axis accelerometer), a gyroscope 150 (e.g., a 1-axis, 2-axis, or 3-axis gyroscope), and/or a magnetometer 160 (e.g., a 1-axis, 2-axis, or 3-axis magnetometer).
  • the probe 110 also includes a haptic motor 170 capable of, for example, vibrating the probe 110 to provide haptic feedback to a user holding the probe 110.
  • the haptic motor 170 may be located outside of the probe 110, such as for example in a handheld host 130 (e.g., a smartphone or tablet computer).
  • aspects of the present disclosure can be implemented with medical images of subjects obtained using any suitable medical imaging device and/or modality.
  • medical images and medical imaging devices include x-ray images (angiographic images, fluoroscopic images, images with or without contrast) obtained by an x-ray imaging device, computed tomography (CT) images obtained by a CT imaging device, positron emission tomography-computed tomography (PET-CT) images obtained by a PET-CT imaging device, magnetic resonance images (MRI) obtained by an MRI device, single-photon emission computed tomography (SPECT) images obtained by a SPECT imaging device, optical coherence tomography (OCT) images obtained by an OCT imaging device, and intravascular photoacoustic (IVPA) images obtained by an IVPA imaging device.
  • the transducer array 112 emits ultrasound signals towards an anatomical object 105 of a subject and receives echo signals reflected from the object 105 back to the transducer array 112.
  • the ultrasound transducer array 112 can include any suitable number of acoustic elements, including one or more acoustic elements and/or a plurality of acoustic elements.
  • the transducer array 112 includes a single acoustic element.
  • the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration.
  • the transducer array 112 can include between 1 acoustic element and 10000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 500 acoustic elements, 812 acoustic elements, 1000 acoustic elements, 3000 acoustic elements, 8000 acoustic elements, and/or other values both larger and smaller.
  • the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (ID) array, a 1.x dimensional array (e.g., a 1.5D array), or a two- dimensional (2D) array.
  • the transducer array 112 can be configured to obtain one-dimensional, two- dimensional, and/or three-dimensional images of a subject’s anatomy.
  • the transducer array 112 may include a piezoelectric micromachined ultrasound transducer (PMUT), capacitive micromachined ultrasonic transducer (CMUT), single crystal, lead zirconate titanate (PZT), PZT composite, other suitable transducer types, and/or combinations thereof.
  • the object 105 may include any anatomy or anatomical feature, such as a kidney, liver, and/or any other anatomy of a subject.
  • the present disclosure can be implemented in the context of any number of anatomical locations and tissue types, including without limitation, organs including the liver, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood vessels, blood, abdominal organs, and/or other systems of the body.
  • the object 105 may include malignancies such as tumors, cysts, lesions, hemorrhages, or blood pools within any part of human anatomy.
  • the anatomy may be a blood vessel, such as an artery or a vein of a subject’s vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body.
  • the present disclosure can be implemented in the context of man-made structures such as, but without limitation, heart valves, stents, shunts, filters, implants and other devices.
  • the beamformer 114 is coupled to the transducer array 112.
  • the beamformer 114 controls the transducer array 112, for example, for transmission of the ultrasound signals and reception of the ultrasound echo signals.
  • the beamformer 114 may apply a time-delay to signals sent to individual acoustic transducers within the transducer array 112 such that an acoustic signal is steered in any suitable direction propagating away from the probe 110.
  • the beamformer 114 may further provide image signals to the processor circuit 116 based on the response of the received ultrasound echo signals.
  • the beamformer 114 may include multiple stages of beamforming. The beamforming can reduce the number of signal lines for coupling to the processor circuit 116.
  • the transducer array 112 in combination with the beamformer 114 may be referred to as an ultrasound imaging component.
  • the processor 116 is coupled to the beamformer 114.
  • the processor 116 is configured for communication with the ultrasound probe, and is configured to output the user guidance and other outputs described herein.
  • the processor 116 may also be described as a processor circuit, which can include other components in communication with the processor 116, such as a memory, beamformer 114, communication interface 118, and/or other suitable components.
  • the processor 116 may include a central processing unit (CPU), a graphical processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a controller, a field programmable gate array (FPGA) device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 116 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the processor 116 is configured to process the beamformed image signals. For example, the processor 116 may perform filtering and/or quadrature demodulation to condition the image signals.
  • the processor 116 and/or 134 can be configured to control the array 112 to obtain ultrasound data associated with the object 105.
  • the communication interface 118 is coupled to the processor 116.
  • the communication interface 118 may include one or more transmitters, one or more receivers, one or more transceivers, and/or circuitry for transmitting and/or receiving communication signals.
  • the communication interface 118 can include hardware components and/or software components implementing a particular communication protocol suitable for transporting signals over the communication link 120 to the host 130.
  • the communication interface 118 can be referred to as a communication device or a communication interface module.
  • the communication link 120 may be any suitable communication link.
  • the communication link 120 may be a wired link, such as a universal serial bus (USB) link or an Ethernet link.
  • the communication link 120 may be a wireless link, such as an ultra-wideband (UWB) link, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 WiFi link, or a Bluetooth link.
  • the communication interface 136 may receive the image signals.
  • the communication interface 136 may be substantially similar to the communication interface 118.
  • the host 130 may be any suitable computing and display device, such as a workstation, a personal computer (PC), a laptop, a tablet, or a mobile phone.
  • the processor 134 is coupled to the communication interface 136.
  • the processor 134 may also be described as a processor circuit, which can include other components in communication with the processor 134, such as the memory 138, the communication interface 136, an optional speaker 139, and/or other suitable components.
  • the processor 134 may be implemented as a combination of software components and hardware components.
  • the processor 134 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a controller, an FPGA device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 134 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the processor 134 can be configured to generate image data from the image signals received from the probe 110.
  • the processor 134 can apply advanced signal processing and/or image processing techniques to the image signals.
  • the processor 134 can form a three-dimensional (3D) volume image from the image data.
  • the processor 134 can perform real-time processing on the image data to provide a streaming video of ultrasound images of the object 105.
  • the host 130 includes a beamformer.
  • the processor 134 can be part of and/or otherwise in communication with such a beamformer.
  • the beamformer in the host 130 can be a system beamformer or a main beamformer (providing one or more subsequent stages of beamforming), while the beamformer 114 is a probe beamformer or micro-beamformer (providing one or more initial stages of beamforming).
  • the memory 138 is coupled to the processor 134.
  • the memory 138 may be any suitable storage device, such as a cache memory (e.g., a cache memory of the processor 134), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, solid state drives, other forms of volatile and non-volatile memory, or a combination of different types of memory.
  • the memory 138 can be configured to store subject information, measurements, data, or files relating to a subject’s medical history, history of procedures performed, anatomical or biological features, characteristics, or medical conditions associated with a subject, computer readable instructions, such as code, software, or other application, as well as any other suitable information or data.
  • the memory 138 may be located within the host 130.
  • Subject information may include measurements, data, files, other forms of medical history, such as but not limited to ultrasound images, ultrasound videos, and/or any imaging information relating to the subject’s anatomy.
  • the subject information may include parameters related to an imaging procedure such as an anatomical scan window, a probe orientation, and/or the subject position during an imaging procedure.
  • the memory 138 can also be configured to store information related to the training and implementation of machine learning algorithms (e.g., trained neural networks or machine learning models) and/or information related to implementing image recognition algorithms for detecting/segmenting anatomy, image quantification algorithms, and/or image acquisition guidance algorithms, including those described herein.
  • the display 132 is coupled to the processor circuit 134.
  • the display 132 may be a monitor or any suitable display.
  • the display 132 is configured to display the ultrasound images, image videos, and/or any imaging information of the object 105.
  • the ultrasound imaging system 100 may be used to assist a sonographer in performing an ultrasound scan.
  • the scan may be performed at a point-of-care setting.
  • the host 130 is a console or movable cart.
  • the host 130 may be a mobile device, such as a tablet, a mobile phone, or portable computer.
  • the ultrasound system can acquire an ultrasound image of a particular region of interest within a subject’s anatomy.
  • the ultrasound imaging system 100 may then analyze the ultrasound image to identify various parameters associated with the acquisition of the image such as the scan window, the probe orientation, the subject position, and/or other parameters.
  • the ultrasound imaging system 100 may then store the image and these associated parameters in the memory 138.
  • the ultrasound imaging system 100 may retrieve the previously acquired ultrasound image and associated parameters for display to a user which may be used to guide the user of the ultrasound imaging system 100 to use the same or similar parameters in the subsequent imaging procedure, as will be described in more detail hereafter.
  • the processor 134 may utilize deep learning-based prediction networks to identify parameters of an ultrasound image, including an anatomical scan window, probe orientation, subject position, and/or other parameters. In some aspects, the processor 134 may receive metrics or perform various calculations relating to the region of interest imaged or the subject’s physiological state during an imaging procedure. These metrics and/or calculations may also be displayed to the sonographer or other user via the display 132.
  • the host 130 may also include a speaker 180.
  • the speaker 180 may for example be used to provide warning tones, beeps, or other auditory feedback to the user.
  • FIG. 2 is a schematic diagram of a processor circuit 250, according to aspects of the present disclosure.
  • the processor circuit 250 may be implemented in the ultrasound imaging system 100, or other devices or workstations (e.g., third-party workstations, network routers, etc.), or on a cloud processor or other remote processing unit, as necessary to implement the method.
  • the processor circuit 250 may include a processor 260, a memory 264, and a communication module 268. These elements may be in direct or indirect communication with each other, for example via one or more buses.
  • the processor 260 may include a central processing unit (CPU), a digital signal processor (DSP), an ASIC, a controller, or any combination of general-purpose computing devices, reduced instruction set computing (RISC) devices, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other related logic devices, including mechanical and quantum computers.
  • the processor 260 may also comprise another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 260 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the memory 264 may include a cache memory (e.g., a cache memory of the processor 260), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and nonvolatile memory, or a combination of different types of memory.
  • the memory 264 includes a non-transitory computer-readable medium.
  • the memory 264 may store instructions 266.
  • the instructions 266 may include instructions that, when executed by the processor 260, cause the processor 260 to perform the operations described herein.
  • Instructions 266 may also be referred to as code.
  • the terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s).
  • the terms “instructions” and “code” may refer to one or more programs, routines, subroutines, functions, procedures, etc.
  • “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
  • the communication module 268 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor circuit 250, and other processors or devices.
  • the communication module 268 can be an input/output (I/O) device.
  • the communication module 268 facilitates direct or indirect communication between various elements of the processor circuit 250 and/or the ultrasound imaging system 100.
  • the communication module 268 may communicate within the processor circuit 250 through numerous methods or protocols.
  • Serial communication protocols may include but are not limited to United States Serial Protocol Interface (US SPI), Inter-Integrated Circuit (I2C), Recommended Standard 232 (RS-232), RS-485, Controller Area Network (CAN), Ethernet, Aeronautical Radio, Incorporated 429 (ARINC 429), MODBUS, Military Standard 1553 (MIL-STD-1553), or any other suitable method or protocol.
  • Parallel protocols include but are not limited to Industry Standard Architecture (ISA), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), Peripheral Component Interconnect (PCI), Institute of Electrical and Electronics Engineers 488 (IEEE-488), IEEE-1284, and other suitable protocols. Where appropriate, serial and parallel communications may be bridged by a Universal Asynchronous Receiver Transmitter (UART), Universal Synchronous Receiver Transmitter (USART), or other appropriate subsystem.
  • External communication may be accomplished using any suitable wireless or wired communication technology, such as a cable interface such as a universal serial bus (USB), micro USB, Lightning, or FireWire interface, Bluetooth, Wi-Fi, ZigBee, Li-Fi, or cellular data connections such as 2G/GSM (global system for mobiles), 3G/UMTS (universal mobile telecommunications system), 4G, long term evolution (LTE), WiMax, or 5G.
  • a Bluetooth Low Energy (BLE) radio can be used to establish connectivity with a cloud service, for transmission of data, and for receipt of software patches.
  • the controller may be configured to communicate with a remote server, or a local device such as a laptop, tablet, or handheld device, or may include a display capable of showing status variables and other information. Information may also be transferred on physical media such as a USB flash drive or memory stick.
  • FIG. 3 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible on the abdomen 310 of the patient 300 is a desired sweep pattern 320 intended to capture images of desired features of the patient’s anatomy.
  • the sweep pattern 320 consists of multiple vertical sweep lines 330 and multiple horizontal sweep lines 340. Each sweep line 330, 340 represents a desired path for one imaging sweep of the abdomen 310.
  • the sweep pattern consists of three vertical sweep lines 330, all in an upward direction with respect to the patient, and three horizontal sweep lines 340, all in a left-right direction with respect to the patient.
  • a sweep pattern 320 may include more or fewer sweep lines, including vertical sweep lines 330, horizontal sweep lines 340, or combinations thereof, in any combination of upward, downward, left, or right directions. Furthermore, a sweep pattern 320 may cover other portions of the patient’s body, including but not limited to the head, neck, spine, limbs, etc.
  • FIG. 4 is a schematic, diagrammatic representation, in block diagram form, of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure.
  • The ultrasound sweep evaluation system 400 acquires ultrasound images 410 from one or more image sweeps, wherein each sweep may for example contain 30 images per second over a 10-second recording period, for a total of 300 image frames.
  • the ultrasound sweep evaluation system 400 may include an IMU 420 to record the motion of the ultrasound imaging probe. Either or both of the ultrasound images 410 and the motion data from the IMU 420 may be used by a signal processing unit 430 to analyze the motion of the probe and determine whether it is compliant with the desired blind sweep protocol.
  • the signal processing unit may include a sensor fusion algorithm such as a Kalman filter or extended Kalman filter.
  • a Kalman filter uses linear quadratic estimation (LQE) to turn a series of noisy measurements observed over time into a series of accurate estimates for the true values of the variables.
  • the system may include hidden variables, and may reduce a large number of input variables into a smaller number of output variables such that, for example, 6-DOF IMU readings (x, y, and z linear accelerations, and rotation rates around the x, y, and z axes) and 3-DOF magnetometer readings (x, y, and z magnetic field strengths) may yield a 6-element output vector (x, y, and z positions and angles).
  • the Kalman filter may produce more output variables than there are inputs.
  • the same 6-DOF IMU readings may yield a 15-element output of x, y, and z accelerations, x, y, and z velocities, x, y, and z positions, x, y, and z rotation rates, and x, y, and z angles.
  • An extended Kalman filter is a nonlinear version of a standard (linear) Kalman filter, which linearizes about an estimate of the current mean and covariance, and is frequently used in well-defined transition models such as navigation based on multiple unrelated input variables.
  • the state transition and observation models need not be linear functions of the state, but may instead be differentiable functions.
  • the EKF may use multivariate Taylor series expansions to linearize a model about a working point at any given time k. If the system model is not well known or is inaccurate, then Monte Carlo methods may be employed within the EKF to generate variance around the input values, such that a mean prediction for each output variable can be extracted.
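To make the filtering step concrete, below is a minimal single-axis linear Kalman filter (not the extended variant discussed above) with a constant-velocity state driven by bias-corrected acceleration. The full 6-DOF or 9-DOF fusion described above would extend the state, control, and measurement models; the noise parameters here are illustrative assumptions.

```python
import numpy as np

class AxisKalman:
    """1-D constant-velocity Kalman filter: state x = [position, velocity].

    Acceleration enters as a control input; a position-like measurement
    (e.g., from image-based or magnetometer-aided estimates) corrects drift.
    Process and measurement noise values are illustrative assumptions.
    """

    def __init__(self, dt, process_var=1e-3, meas_var=1e-2):
        self.x = np.zeros((2, 1))                       # state estimate
        self.P = np.eye(2)                              # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
        self.B = np.array([[0.5 * dt * dt], [dt]])      # control (acceleration) input
        self.H = np.array([[1.0, 0.0]])                 # measure position only
        self.Q = process_var * np.eye(2)
        self.R = np.array([[meas_var]])

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, measured_position):
        y = measured_position - (self.H @ self.x)       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```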
  • the signal processing unit 430 may employ other types of sensor fusion algorithms, including but not limited to a complementary filter, Madgwick filter, or a learning network.
  • the signal processing unit 430 may implement or include any suitable type of learning network.
  • the signal processing unit 430 could include a neural network, such as a recurrent neural network (RNN) or temporal convolutional network (TCN) that includes temporal information.
  • the neural network may additionally or alternatively be an encoder-decoder type network, or may utilize a backbone architecture based on other types of neural networks, such as an object detection network (if images are also used), classification network, etc.
  • the TCN may for example include a set of N convolutional layers, where N may be any positive integer. Fully connected layers can be omitted when the TCN is a backbone.
  • the TCN may also include max pooling layers and/or activation layers.
  • Each convolutional layer may include a set of filters configured to extract features from an input (e.g., from an axis of the IMU). The value N and the size of the filters may vary depending on the aspects.
  • the convolutional layers may utilize any non-linear activation function, such as for example a leaky rectified linear unit (ReLU) activation function.
  • normalization/regularization methods may be used, such as batch normalization.
  • the max pooling layers gradually adjust the dimension of the input variables (e.g., 6-DOF or 9-DOF IMU outputs) to a dimension of the desired result (e.g., a 15-element probe motion vector).
  • Fully connected layers may be referred to as perception or perceptive layers. In some aspects, fully connected layers may be found in the neural network.
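A toy PyTorch sketch of a TCN-style backbone of the kind described above, mapping a windowed 9-channel IMU stream to a 15-element motion vector. The layer count, channel widths, and window length are illustrative assumptions, not values from the disclosure.

```python
import torch
import torch.nn as nn

class ImuTCN(nn.Module):
    """Toy temporal convolutional network mapping a windowed IMU stream
    (9 channels: accelerometer, gyroscope, magnetometer) to a 15-element
    motion vector (accelerations, velocities, positions, rotation rates,
    angles). All sizes are illustrative assumptions."""

    def __init__(self, in_channels=9, out_dim=15, hidden=32, window=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),                       # gradually reduce the temporal dimension
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),
        )
        # Fully connected ("perception") head producing the motion vector.
        self.head = nn.Linear(hidden * (window // 4), out_dim)

    def forward(self, x):                          # x: (batch, 9, window)
        features = self.backbone(x)
        return self.head(features.flatten(1))

# Usage sketch: one 128-sample window of 9-DOF IMU data.
model = ImuTCN()
motion = model(torch.randn(1, 9, 128))             # -> shape (1, 15)
```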
  • inputs to the signal processing unit 430 may also include classified ultrasound image features, orientations derived from ultrasound image features, camera images of the probe, camera images from the probe, or positioning system inputs external to the ultrasound probe, including but not limited to global positioning system (GPS) inputs.
  • a user interface 440 provides instructions to the user regarding the blind-sweep protocol, as well as feedback on whether (and to what degree) the motions of the probe are compliant with the protocol.
  • Feedback may include real-time feedback 550, including live deviation alerts provided during the sweep, and post-sweep feedback 460, indicating whether that sweep was compliant or non-compliant, and in some cases may also include statistics or other information regarding which aspects of the sweep were non-compliant. In some cases, a non-compliant sweep may be rejected, and the user may be asked to repeat that sweep.
  • the signal processing unit 430 may produce a clinical report 470.
  • the clinical report may include 2D or 3D images, anatomical measurements or statistics, and other clinically relevant information.
  • the clinical report 470 may also include performance statistics on the user who performed the sweeps.
  • the content of the clinical report 470 may vary depending on geographic region, on the type of anatomy being imaged, or on other factors.
  • the clinical report 470 may, in some cases, be available to the user of the ultrasound sweep evaluation system 400, who may for example be a physician. In other cases, the user may be a paramedic, midwife, or other person not specifically trained in the interpretation of ultrasound images. In such cases, the clinical report 470 may be stored locally, may be transmitted over a network for review by a remote physician, or may be passed to a downstream AI tool for further analysis.
  • block diagrams are provided herein for exemplary purposes; a person of ordinary skill in the art will recognize myriad variations that nonetheless fall within the scope of the present disclosure.
  • block diagrams may show a particular arrangement of components, modules, services, steps, processes, or layers, resulting in a particular data flow. It is understood that some aspects of the systems disclosed herein may include additional components, that some components shown may be absent from some aspects, and that the arrangement of components may be different than shown, resulting in different data flows while still performing the methods described herein.
  • Figure 5 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method 500, according to aspects of the present disclosure. It is understood that the steps of method 500 may be performed in a different order than shown in Figure 5, additional steps can be provided before, during, and after the steps, and/or some of the steps described can be replaced or eliminated in other embodiments. One or more steps of the method 500 can be carried out by one or more devices and/or systems described herein, such as components of the ultrasound imaging system 100, ultrasound sweep evaluation system 400, and/or processor circuit 250.
  • the method 500 includes controlling (e.g., with a processor) the ultrasound probe to obtain ultrasound image frames of the patient’s anatomy during the blind sweep protocol. Execution then proceeds to step 520.
  • step 520 the method 500 includes performing anatomy detection in the ultrasound image frames, as described below. Execution then proceeds to step 550.
  • step 530 the method 500 includes controlling (e.g., with a processor) the IMU to obtain IMU data at the same time (though not necessarily the same rate) the ultrasound image frames are being obtained by the ultrasound probe. Execution then proceeds to step 540.
  • step 540 the method 500 includes using the IMU data to determine the locations of the ultrasound probe, in a global coordinate system, at the times of capture for each of the ultrasound images. Execution then proceeds to step 550.
  • step 550 the method 500 includes registering the ultrasound image frames and the detected anatomy into the global coordinate system.
  • step 560 the method 500 includes generating an anatomical mapping in the global coordinate system.
  • step 570 the method 500 includes determining whether the imaging during the blind sweep protocol is complete (e.g., whether all of the anatomical ROIs have been fully imaged). If yes, execution proceeds to step 592. If No, execution proceeds to step 580.
  • the determination of complete vs. incomplete imaging can be made for example based on (a) whether a given anatomy or ROI is imaged at all, (b) whether a bounding box generated by the anatomy detector has a high or low confidence (e.g., a confidence above or below a specified threshold), (c) whether a bounding box generated by the anatomy detector occurs at the edge of an image frame, (d) whether a bounding box generated by the anatomy detector occurs in the last image frame of a sweep, (e) whether a standard plane classification can be performed (e.g., whether the standard plane can be found, with a confidence above a specified threshold) in at least some image frames of the sweep, (f) whether additional metrics calculated from the anatomical mapping (e.g., the gestational age of a fetus) can be computed (e.g., with a confidence above a specified threshold), or (g) other related criteria as would occur to a person of ordinary skill in the art.
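For illustration only: a minimal Python sketch applying criteria (a) through (d) above to decide whether an ROI was incompletely imaged. The Box fields, normalized coordinates, confidence threshold, and edge margin are assumptions.

```python
# Illustrative ROI completeness check; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    x: float            # top-left corner, normalized [0, 1]
    y: float
    w: float            # width and height, normalized [0, 1]
    h: float
    confidence: float
    frame_index: int

def roi_incomplete(boxes, n_frames, conf_thresh=0.5, edge_margin=0.02):
    """Return True if the ROI should trigger a follow-up sweep."""
    if not boxes:                                # (a) anatomy never detected
        return True
    for b in boxes:
        if b.confidence < conf_thresh:           # (b) low-confidence detection
            return True
        at_edge = (b.x < edge_margin or b.y < edge_margin or
                   b.x + b.w > 1 - edge_margin or
                   b.y + b.h > 1 - edge_margin)
        if at_edge:                              # (c) box touches a frame edge
            return True
        if b.frame_index == n_frames - 1:        # (d) box in last frame of sweep
            return True
    return False
```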
  • the criteria for detecting a pre-defined standard anatomical plane may, for example, be stored in a memory accessible to the processor.
  • the pre-defined anatomical plane can be based on standards established by authorities in the field (physician organizations, sonographer organizations, etc.), published in scholarly journals/textbooks, etc. Examples include a pre-defined anatomical plane for fetal head circumference, a pre-defined anatomical plane for fetal abdominal circumference, etc.
  • step 580 the method 500 includes determining the region(s) where the imaging is incomplete (e.g., locations within the global coordinate system where the incompletely imaged anatomy is located or should be located). Execution then proceeds to step 590.
  • the method 500 includes providing output (whether visual, auditory, haptic, or otherwise) that includes location guidance for additional sweep(s) required to complete the imaging for the blind sweep protocol (e.g., to complete the imaging of the incompletely imaged ROIs).
  • Such re-sweeps may include (a) repeating the entire blind sweep protocol, (b) repeating one or more sweeps of the protocol in the same locations as the previous sweeps, (c) repeating one or more sweeps of the protocol in slightly different locations from the previous sweeps, or (d) other related re-sweeps as would occur to a person of ordinary skill in the art (including for example partial sweeps, diagonal sweeps, highly localized sweeps, etc.).
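For illustration only: a minimal sketch of one way a re-sweep line could be proposed, by shifting the original sweep laterally toward the centroid of the incompletely imaged detections in the global coordinate system. The lateral-shift heuristic and the function names are assumptions, not the disclosed method.

```python
# Illustrative re-sweep suggestion: shift the original sweep line so it passes
# closer to the missed anatomy while keeping its direction and length.
import numpy as np

def suggest_resweep(original_start, original_end, incomplete_positions):
    """Return (start, end) of a suggested re-sweep in global coordinates."""
    start = np.asarray(original_start, dtype=float)
    end = np.asarray(original_end, dtype=float)
    if len(incomplete_positions) == 0:
        return start, end                        # nothing missed: repeat as-is
    centroid = np.mean(np.asarray(incomplete_positions, dtype=float), axis=0)
    direction = end - start
    midpoint = (start + end) / 2.0
    shift = centroid - midpoint
    # Remove the along-sweep component so only a lateral shift remains.
    shift -= direction * (shift @ direction) / (direction @ direction)
    return start + shift, end + shift
```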
  • the method 500 includes providing output (whether visual, auditory, haptic, or otherwise) including an indication that the imaging during the blind sweep protocol is complete (e.g., that no further sweeps need to be performed).
  • the method 500 includes performing additional image processing to generate anatomical metrics (e.g., gestational age of a fetus).
  • the additional sweeps can significantly improve the anatomical metrics.
  • the additional image processing and/or the generated anatomical metrics are more accurate because they use more complete information from a more complete anatomical mapping (e.g., by obtaining additional anatomical information from additional sweeps to supplement the anatomical information from the blind sweep protocol that may be missing, incomplete, degraded, etc.).
  • the processor may then for example output the anatomical metrics to a display.
  • the method 500 is now complete.
  • flow diagrams are provided herein for exemplary purposes; a person of ordinary skill in the art will recognize myriad variations that nonetheless fall within the scope of the present disclosure.
  • the logic of flow diagrams may be shown as sequential. However, similar logic could be parallel, massively parallel, object oriented, real-time, event-driven, cellular automaton, or otherwise, while accomplishing the same or similar functions.
  • a processor may divide each of the steps described herein into a plurality of machine instructions, and may execute these instructions at the rate of several hundred, several thousand, several million, or several billion per second, in a single processor or across a plurality of processors.
  • decoding IMU measurements into real-time position data may require an update rate of 100 Hz, while in some aspects, real-time anatomy detection may occur at the frame rate of the ultrasound system (e.g., 20 Hz, 30 Hz, etc.).
  • FIG. 6 is a sweep progress screen display 600 of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure.
  • the sweep progress screen display 600 includes a stylized patient diagram 610 over which a sweep progress indicator 620 is overlaid.
  • the sweep progress indicator 620 includes an illuminated current sweep line 630, indicating the position and direction of the current sweep, and a position indicator 640.
  • the sweep progress screen display 600 also includes a control box 650 that provides instructions to the user, including a countdown timer 660, a cancel button 670 (e.g., to abort the current sweep), and an exit button 680 (e.g., to abort the entire blind sweep protocol).
  • FIG. 7 is a schematic, diagrammatic, perspective view of an imaging probe (e.g., an ultrasound imaging probe) 110 in contact with a body surface 740 of a patient, according to aspects of the present disclosure.
  • the probe 110 may include an IMU (e.g., an IMU located inside the probe 110).
  • the probe 110 includes its own local coordinate system, with a Y-axis 730 aligned with the long axis of the probe, and orthogonal X-axis 710 and Z-axis 720.
  • the probe 110 may have a different rotational velocity (e.g., rotational velocity 780) around each of the X, Y, and Z axes.
  • the rotational velocities can be integrated to yield a rotation angle around each axis, which represents the orientation of the probe 110 in space, according to the right-hand rule.
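For illustration only: a minimal sketch of integrating gyroscope angular velocities into rotation angles about the probe axes. The rectangular-rule integration is an assumption; a production implementation would more likely integrate quaternions to avoid accumulating error.

```python
# Illustrative integration of angular velocity (rad/s) to rotation angles
# about the probe's X, Y, and Z axes using a simple rectangular rule.
import numpy as np

def integrate_gyro(angular_velocity, dt):
    """angular_velocity: (n_samples, 3) array; dt: sample period in seconds."""
    return np.cumsum(angular_velocity * dt, axis=0)   # (n_samples, 3) angles
```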
  • a global coordinate system that includes a Yg axis 770 and orthogonal Xg axis 750 and Zg axis 760.
  • the global coordinate system may for example align with the caudal-cranial axis, the left-right axis, and the dorsal-ventral axis of the patient, although this will not always be the case.
  • although an effort may be made to align the probe with the global coordinate system, it will not generally be the case that the probe coordinate system and the global coordinate system are identical.
  • FIG. 8 is a schematic, diagrammatic representation, in hybrid flow diagram / block diagram form, of an example ultrasound sweep guidance method 800, according to aspects of the present disclosure.
  • the method 800 includes receiving, into a sensor fusion sub-process 802, data streams from the IMU 115, which may include an accelerometer 140, gyroscope 150, and magnetometer 160, as described above.
  • the sensor fusion sub-process 802 may for example be a Kalman filter, machine learning network, or similar.
  • step 810 the method 800 includes accounting for any bias in the gyroscope 150. Execution then proceeds to step 815.
  • step 815 the method 800 optionally includes denoising the data stream from the gyroscope 150. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
  • step 820 the method 800 includes accounting for any bias in the accelerometer 140. Execution then proceeds to step 825.
  • step 825 the method 800 optionally includes denoising the data stream from the accelerometer. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
  • step 830 the method 800 includes accounting for any bias in the magnetometer 160. Execution then proceeds to step 832 and, in parallel, to step 835.
  • step 835 the method 800 optionally includes denoising the data stream from the magnetometer. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
  • step 840 the method 800 includes fusing the data streams from the gyroscope 150, accelerometer 140, and magnetometer 160, as described below. Execution then proceeds to step 850.
  • step 850 the method 800 includes determining the orientation, angular velocity, velocity, and/or position of the ultrasound probe, using the fused data streams. The method is now complete.
  • although steps 810 through 850 are shown herein as individual steps, and may be performed as such, it should be appreciated that some aspects of the sensor fusion sub-process 802 may be capable of performing some or all of these functions in a reduced number of steps.
  • a Kalman filter or neural network can, in some instances, fuse noisy/biased sensor data in a way that yields the desired output vector.
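For illustration only: a minimal sketch of a complementary filter, one of the simpler fusion options mentioned above, blending the integrated gyroscope rate with an accelerometer tilt estimate for a single axis. The blending factor and the atan2 tilt estimate are assumptions.

```python
# Illustrative single-axis complementary filter: the gyro term is smooth but
# drifts, the accelerometer term is noisy but drift-free; alpha blends them.
import math

def complementary_filter(angle, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """Return the updated orientation angle (radians) for one axis."""
    gyro_angle = angle + gyro_rate * dt
    accel_angle = math.atan2(accel_y, accel_z)
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```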
  • if the sensor data streams are sampled at different rates or times, the sampling can be unified. This may be accomplished by any synchronization algorithm known in the art (e.g., resampling one data stream to match the other, or by only using samples taken at the same timepoint, etc.).
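For illustration only: a minimal sketch of unifying sampling by resampling one data stream onto another stream's timestamps with linear interpolation; the use of np.interp is an assumption rather than the disclosed synchronization algorithm.

```python
# Illustrative synchronization of two sensor streams by resampling one onto
# the other's timestamps with linear interpolation.
import numpy as np

def resample_to(timestamps_ref, timestamps_src, values_src):
    """Resample a 1-D source signal onto the reference timestamps."""
    return np.interp(timestamps_ref, timestamps_src, values_src)
```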
  • the first two axes may for example be defined along the short and long axes of the probe, with the third axis being orthogonal to the first two axes.
  • the origin of this frame of reference may for example be the physical location of the IMU within the probe.
  • the vertical axis may be aligned with the gravitational vector, and the second two axes may be defined by the orientation of the probe at the acquisition start time (nominally aligned with the patient’s longitudinal and transverse axes).
  • the origin in the global reference frame may be the position of the probe at the acquisition start time.
  • the angular velocity measured by the gyroscope can be integrated to obtain the current IMU orientation.
  • angular velocity and acceleration data may be fused with an extended Kalman filter (EKF), Kalman filter, complementary filter, Madgwick filter, neural network, or other similar sensor fusion methodologies.
  • EKF extended Kalman filter
  • the output in this case would be more accurate estimates of orientation than from simply integrating the angular velocity.
  • changes in position derived from the IMU measurements may be relative to the initial location of the ultrasound probe. Initial position may be determined by anatomical landmarks on the patient (e.g., the patient’s bladder, pubic symphysis, etc.) or other means.
  • orientation can then be used to transform IMU acceleration to the global reference frame.
  • the gravitational acceleration is subtracted from the vertical axis acceleration.
  • acceleration can be single- and double-integrated to obtain velocity and position, respectively.
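For illustration only: a minimal sketch of single and double integration of global-frame acceleration (after gravity subtraction) into velocity and position using the trapezoidal rule; the zero initial conditions are an assumption.

```python
# Illustrative integration of acceleration to velocity and position.
import numpy as np

def integrate_acceleration(accel, dt):
    """accel: (n_samples, 3) m/s^2 in the global frame; returns (vel, pos)."""
    vel = np.zeros_like(accel)
    pos = np.zeros_like(accel)
    for k in range(1, len(accel)):
        vel[k] = vel[k - 1] + 0.5 * (accel[k - 1] + accel[k]) * dt
        pos[k] = pos[k - 1] + 0.5 * (vel[k - 1] + vel[k]) * dt
    return vel, pos
```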
  • the data streams from the gyroscope and accelerometer may be fused using a neural network.
  • Various methods for fusing the data streams with a neural network have been described in the art, generally using a recurrent neural network (RNN)-based or temporal convolutional network (TCN)-based approach, which allows for the incorporation of temporal information. Incorporating temporal information can be crucial, both due to the time-series structure of the IMU data streams, and the tendency for IMU measurements to experience drift over time, which must be accounted for to obtain accurate measurements.
  • Figure 9 is a schematic, diagrammatic illustration, in block diagram form, of an anatomy detection subsystem 900, according to aspects of the present disclosure.
  • An ultrasound video stream 910 comprising multiple frames 920 (whether real-time or recorded) is fed into the object detector 930.
  • the object detector 930 may implement or include any suitable type of learning network.
  • the object detector 930 could include a neural network, such as a convolutional neural network (CNN).
  • the convolutional neural network may additionally or alternatively be an encoder-decoder type network, or may utilize a backbone architecture based on other types of neural networks, such as an object detection network, classification network, etc.
  • One example backbone network is the Darknet YOLO backbone (e.g., YOLOv3), which can be used for object detection.
  • the CNN may for example include a set of N convolutional layers, where N may be any positive integer. Fully connected layers can be omitted when the CNN is a backbone.
  • the CNN may also include max pooling layers and/or activation layers.
  • Each convolutional layer may include a set of filters configured to extract features from an input (e.g., from a frame of the ultrasound video).
  • the value N and the size of the filters may vary depending on the aspects.
  • the convolutional layers may utilize any non-linear activation function, such as for example a leaky rectified linear unit (ReLU) activation function and/or batch normalization.
  • the max pooling layers gradually shrink the high-dimensional output to a dimension of the desired result (e.g., bounding boxes of a detected feature). Outputs of the detection network may include numerous bounding boxes, with most having very low confidence scores and thus being filtered out or ignored.
  • Higher-confidence bounding boxes 965 indicate detection of a particular anatomical feature, such as the head or spine of a fetus.
  • Fully connected layers may be referred to as perception or perceptive layers.
  • perception/perceptive and/or fully connected layers may be found in the object detector 930 (e.g., a multi-layer perceptron).
  • Outputs of the object detector 930 may include an annotated ultrasound video 940 made up of a plurality of annotated image frames 950, as well as per-frame metrics 960.
  • the per-frame metrics include the number of bounding boxes identified in the frame, the respective areas of the bounding boxes, and the respective confidence scores of each box.
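For illustration only: a minimal sketch of computing the per-frame metrics described above from detector output; the (width, height, confidence) box representation and the 0.5 confidence threshold are assumptions.

```python
# Illustrative per-frame metrics: box count, areas, and confidence scores,
# with very low-confidence boxes filtered out.
def per_frame_metrics(boxes, conf_thresh=0.5):
    """boxes: iterable of (width, height, confidence) tuples for one frame."""
    kept = [(w, h, c) for (w, h, c) in boxes if c >= conf_thresh]
    return {
        "num_boxes": len(kept),
        "areas": [w * h for (w, h, _) in kept],
        "confidences": [c for (_, _, c) in kept],
    }
```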
  • the systems and methods disclosed herein are broadly applicable to different types of anatomical features, and can also be used to identify suspected imaging artifacts.
  • the object detector can be single-class or multi-class, depending on how the model is built. In the multi-class case, multiple feature classes can be identified, and enclosed in detection boxes, at the same time; in the single-class case, a separate detector may be used for each feature type.
  • These separate single-class detectors may have the same architecture (e.g., layers and connections), but would have been trained with different data (e.g., different images and/or annotations).
  • the system could show the detection of different feature types in the figure by adding e.g. boxes with a black outline color.
  • the system could then calculate two separate metrics based on each type of detection to arrive at the video-level classification for the feature type.
  • the system could calculate a metric based on both/several types of features to arrive at a single video-level classification.
  • the image frames 950 of the ultrasound video 940 include two low-confidence bounding boxes 970 (e.g., bounding boxes with a confidence below 50%, or other threshold) and one edge-located bounding box 980.
  • Low-confidence bounding boxes 970 and edge-located bounding boxes 980 may be indications that the anatomy detected within the bounding box has not been completely or accurately imaged, and thus a new sweep over that location may be necessary.
  • FIG. 10 is a schematic, diagrammatic representation of an ultrasound video 1010 made up of multiple image frames 1020, according to aspects of the present disclosure.
  • Each image frame has a height 1030 (e.g., in millimeters or pixels), a width 1040 (e.g., in millimeters or pixels), and a time of capture 1050 (e.g., in seconds from the start of a sweep).
  • Each image frame 1020 also has a frame number (e.g., Frame 1, Frame 2, etc.) and an associated position in space (e.g., Xi, Yi, Zi; X2, Y2, Z2, etc.).
  • One or more of the frames 1020 may include one or more bounding boxes 965, as described above.
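For illustration only: a minimal sketch of a per-frame record holding the metadata described above (frame number, frame dimensions, capture time, probe position, and bounding boxes); the field names are assumptions.

```python
# Illustrative container for one ultrasound frame and its registration data.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SweepFrame:
    frame_number: int
    height_mm: float
    width_mm: float
    capture_time_s: float                         # seconds from sweep start
    probe_position: Tuple[float, float, float]    # (X, Y, Z), global frame
    boxes: List[Tuple[float, float, float, float, float]] = field(
        default_factory=list)                     # (x, y, w, h, confidence)
```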
  • Figure 11 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible on the abdomen 310 of the patient 300 are locations of ultrasound image frames 1020, including sweep 1 frames 1110, sweep 2 frames 1120, and sweep 3 frames 1130.
  • the image frames 1020 further include frames 1140 that include a first detected anatomy or ROI, and image frames 1150 that include a second detected anatomy or ROI.
  • Figure 12 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible within the abdomen 310 of the patient 300 are a fetal head 1210, fetal spine 1220, fetal umbilical cord 1230, and mother’s navel 1240.
  • a blind sweep protocol may be intended to fully image each of these anatomical features, and portions of the blind sweep protocol may need to be repeated, either in the same locations or in slightly different locations, in order to complete the capture of all required features.
  • Figure 13 is a screen display 1300 of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure. Visible within the screen display are the abdomen 310 of the patient 300, fetal head 1210, fetal spine 1220, fetal umbilical cord 1230, and mother’s navel 1240. Also visible are dotted arrows C1, C2, and C3 indicating the paths of three horizontal blind sweeps that occurred during the initial blind sweep protocol. However, in the example shown in Figure 13, sweep C1 is not well centered on the head 1210, and sweep C3 is not well positioned with respect to the spine 1220 and umbilical cord 1230. Thus, the system has determined that these two sweeps need to be performed again in slightly different positions.
  • the screen display shows two re-sweep lines, C1-2 and C3-2, showing the user where to sweep the ultrasound probe in order to complete the imaging of these ROIs.
  • the instructions to the user may include a visual indication of the position, direction, and length of the re-sweep on the body of the patient, and may also include information about the desired speed or duration of the sweep.
  • Figure 14 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method 1400, according to aspects of the present disclosure.
  • the method 1400 includes controlling (e.g., with a processor) a position detection system to obtain position data simultaneously with the ultrasound image frames obtained during the blind sweep protocol.
  • the position detection system can be or include a magnetic coil in a fixed magnetic field, a 3D camera, a plurality of 2D cameras, one or more ultrasonic depth finders, or other system capable of determining the position of the probe in 3D space.
  • the method 1400 includes determining, using the position data, locations of the ultrasound probe while obtaining the ultrasound image frames.
  • the position data is representative of a plurality of positions of the ultrasound probe during the blind sweep protocol.
  • steps 1430 and 1440 may be performed as part of the method 500 of Figure 5, e.g., instead of or in addition to steps 530 and 540.
  • the position detection system may for example be or include an IMU.
  • a video feed of one or multiple cameras may be used to track the probe motion. Markers may be included on the probe to track its motion, or the intended path could be determined based on the images themselves. This may be accomplished with a variety of visual odometry/simultaneous localization and mapping (SLAM) approaches - for example, by tracking the centroid of the probe across the imaging field of known size at a known frame rate, or by using optical flow-based approaches.
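For illustration only: a minimal OpenCV sketch of an optical-flow-based estimate of in-plane probe motion between two external camera frames; the millimetre-per-pixel scale and the use of the median flow vector are assumptions, not the disclosed tracking method.

```python
# Illustrative estimate of probe displacement between two grayscale camera
# frames using dense Farneback optical flow (OpenCV assumed).
import cv2
import numpy as np

def estimate_probe_shift(prev_gray, curr_gray, mm_per_pixel=0.5):
    """Return the median (dx, dy) probe displacement in millimetres."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    dx = float(np.median(flow[..., 0])) * mm_per_pixel
    dy = float(np.median(flow[..., 1])) * mm_per_pixel
    return dx, dy
```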
  • a neural network could be used to extract the probe's position.
  • the network may be a convolutional neural network which is trained on many images of ultrasound probes with different known orientations moving in known directions on a marked surface. The network can learn the association between the visible markers in the consecutive video frames and probe position, or can learn based on human annotation of the video frames, indicating whether the probe position, angle, and velocity are compliant or non-compliant.
  • the position detection system of step 1430 may be or include a processing controller (e.g., a neural network similar to the neural network 930 of Figure 9) that is trained on the acquired ultrasound images or external camera images and the corresponding IMU data for each successfully accomplished sweep.
  • a variational encoder-decoder architecture, such as an RNN or TCN, could be used to automatically distinguish deviations from the successful sweeps.
  • the network may be fed raw ultrasound or external camera image frames, optical flow data derived from the images, or data produced by some other preprocessing method.
  • the IMU data may also be provided as a raw input, pre-processed in the manner previously described, or by some other method.
  • the processing controller may be used to classify adherent vs. non-adherent sweeps.
  • the position detection system of step 1430 may be or include other tracking sensors and systems (e.g., magnetic tracking sensors and systems) to track the probe position.
  • a fixed external magnetic field generator creates a local magnetic field.
  • three orthogonal magnetic coils are installed in or attached to the ultrasound probe. Voltage is induced in the coils as they pass through the magnetic field, from which position and orientation with respect to the magnetic field generator may be determined.
  • the detected anatomy may be used to track the probe position, using a neural network similar to neural network 930 of Figure 9, that has been trained for example using IMU data and ultrasound images of the patient anatomy, such that the trained neural network is able to determine a position in 3D space based on the ultrasound images (or the detected anatomy) alone.
  • the ultrasound sweep evaluation system advantageously permits untrained and minimally trained users to perform an ultrasound blind sweep protocol to gather anatomical images of high quality. This may result in higher accuracy and higher clinician trust in the results, while potentially decreasing the amount of time required to perform the sweeps.
  • the systems, methods, and devices described herein may be applicable in point of care and handheld ultrasound use cases such as with the Philips Lumify system.
  • the ultrasound sweep evaluation system can be used for any handheld imaging applications, including but not limited to obstetrics, lung imaging, and echocardiography.
  • the ultrasound sweep evaluation system could be deployed on handheld mobile ultrasound devices, and on portable or cart-based ultrasound systems.
  • the ultrasound sweep evaluation system can be used in a variety of settings including emergency departments, ambulances, accident sites, and homes.
  • the applications could also be expanded to other settings.
  • the functionality and output of the system may include the auditory, visual, and/or haptic feedback it provides in relation to the transducer motion.
  • This invention improves the functioning of the ultrasound imaging system in the obstetrics context, especially for use by minimally trained users.
  • the systems, methods, and devices described herein are not limited to obstetric ultrasound applications. Rather, the same technology can be applied to images of other organs or anatomical systems such as the lungs, heart, brain, digestive system, vascular system, etc.
  • the technology disclosed herein is also applicable to other medical imaging modalities where 3D data is available, such as other ultrasound applications, camera-based videos, X-ray videos, and 3D volume images, such as computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, or optical coherence tomography (OCT) scans.
  • All directional references (e.g., upper, lower, inner, outer, upward, downward, left, right, lateral, front, back, top, bottom, above, below, vertical, horizontal, clockwise, counterclockwise, proximal, and distal) are only used for identification purposes to aid the reader’s understanding of the claimed subject matter, and do not create limitations, particularly as to the position, orientation, or use of the ultrasound sweep guidance system.
  • Connection references (e.g., attached, coupled, connected, joined, or “in communication with”) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and in fixed relation to each other.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Physiology (AREA)
  • Theoretical Computer Science (AREA)
  • Vascular Medicine (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)

Abstract

A system includes a processor configured for communication with an ultrasound probe and configured to receive a series of ultrasound image frames from the ultrasound probe during a blind sweep protocol (a series of sweeps of the probe on a patient's body). The processor then receives positions of the ultrasound probe during the blind sweep protocol, detects an anatomical feature of the patient body in the plurality of image frames, and performs registration between the position data and the detected anatomical feature or the series of ultrasound image frames. The processor then generates an anatomical mapping based on the registration, and determines if the blind sweep protocol is incomplete based on the anatomical mapping, the registration, the detected anatomical feature, or the series of ultrasound image frames. If the blind sweep protocol is incomplete, the processor outputs user guidance to perform an additional sweep on the patient's body.

Description

ULTRASOUND IMAGING WITH FOLLOW UP SWEEP GUIDANCE AFTER BLIND SWEEP PROTOCOL
FIELD
[0001] The subject matter described herein relates to devices, systems, and methods for using ultrasound probe position to guide a user to perform an additional sweep after completing a blind sweep protocol for ultrasound imaging.
BACKGROUND
[0002] Ultrasound imaging is often used for diagnostic purposes in an office or hospital setting, but may also be used in resource-constrained care settings (e.g., homes, accident sites, ambulances, mobile health facilities, etc.) by emergency personnel, home health nurses, midwives, etc., who may lack ultrasound expertise. Blind sweep protocols are becoming increasingly common as a method to simplify the ultrasound image acquisition workflow for minimally trained users. Whereas in a typical guided sweep by a trained sonographer, the user must localize anatomical regions of interest (ROIs) by relying on their interpretation of the image output, the blind sweep protocol simply calls for sweeps along a pre-determined grid, without necessitating image understanding on the part of the user.
[0003] This blind sweep protocol greatly streamlines the image acquisition workflow for novice users (the targeted user group).
[0004] However, a blind sweep protocol is less likely to appropriately capture anatomical regions of interest (ROI; e.g., head standard plane or other pre-defined anatomical plane) when compared to guided sweeps by a trained sonographer. For example, anatomical ROIs may not be fully imaged. This has implications for many important screening measurements. For example, the Hadlock equation estimates depend on accurate biometry measurements of the fetus head standard plane to determine head circumference and biparietal diameter, and the abdominal standard plane to determine abdominal circumference. As a result, it may be more difficult to obtain accurate biometry measurements from blind sweeps. In another example, if certain anatomies are incompletely imaged or totally missed in the blind sweep, the automated parameter estimations from the blind sweep (such as but not limited to, gestational age (GA), deepest vertical pocket of the amniotic fluid (AF-DVP), placental location (PL), fetal presentation (FP)) may be inaccurate or indeterminable. In yet another example, if the placenta is found to be low-lying from the original blind sweep, an additional sweep including the cervix might be indicated to rule-in or rule-out placenta previa (PP). [0005] The information included in this Background section of the specification, including any references cited herein and any description or discussion thereof, is included for technical reference purposes only and is not to be regarded as subject matter by which the scope of the disclosure is to be bound.
SUMMARY
[0006] Disclosed are ultrasound sweep evaluation devices, systems, and methods that measure the quality of ROI capture following a blind ultrasound imaging sweep and provide clear instructions to the user about any sweeps that may need to be repeated, either in the same location or in slightly different locations. The ultrasound sweep evaluation system uses an inertial measurement unit (IMU) or other motion tracking methods to map the location of each ultrasound image frame, such that the frames can be registered into an anatomical mapping. Unlike existing blind sweep protocols, the ultrasound sweep evaluation system supplements blind sweeps with probe tracking, anatomy detection, and registration of the detected anatomy to the detected probe position. The system generates an anatomical mapping of the detected anatomy based on the registration, and directs the user to perform follow-up sweeps based on regions of the anatomical mapping that are deemed to be incomplete. Such improved imaging not only increases the confidence of measurements or diagnoses made on the basis of the blind sweep protocol, but also has the potential to improve any downstream image processing used to generate additional metrics (e.g., fetal ultrasound evaluation metrics) derived from the image processing.
[0007] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. One general aspect includes a system, which includes a processor configured for communication with an ultrasound probe, where the processor is configured to: receive a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, where the blind sweep protocol may include a plurality of sweeps of the ultrasound probe on the patient body; receive position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol; detect an anatomical feature of the patient body in the plurality of image frames; perform registration between: the position data; and at least one of the detected anatomical feature or the plurality of ultrasound image frames; generate an anatomical mapping based on the registration; determine if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames; and if the blind sweep protocol is incomplete, output user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, where the additional sweep is different than the plurality of sweeps of the blind sweep protocol. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0008] Implementations may include one or more of the following features. In some aspects, the processor is configured to determine at least one of a location, a direction, a length, or a duration for the additional sweep, based on the anatomical mapping. In some aspects, the user guidance may include a visual representation of at least one of the location, the direction, the length, or the duration for the additional sweep, and the processor is configured to output the user guidance to a display in communication with the processor. In some aspects, the system may include the display. In some aspects, the user guidance is overlaid on the anatomical mapping. In some aspects, the processor is configured to output the anatomical mapping to a display in communication with the processor. In some aspects, the user guidance may include auditory feedback, where the processor is configured to output the user guidance to a speaker in communication with the processor. In some aspects, the system may include the speaker. In some aspects, the user guidance may include haptic feedback, where the processor is configured to output the haptic feedback to a haptic motor in communication with the processor. In some aspects, the haptic motor is disposed within the ultrasound probe. In some aspects, the system may include the ultrasound probe. In some aspects, the ultrasound probe may include at least one of an accelerometer, a gyroscope, or a magnetometer disposed within the ultrasound probe. In some aspects, the processor is configured to receive the position data from at least one of the accelerometer, the gyroscope, or the magnetometer. In some aspects, the processor is configured to receive the position data from at least one of the camera or the magnetic coil. In some aspects, the processor is configured to detect the anatomical feature using a trained machine learning model. In some aspects, to determine if the blind sweep protocol is complete, the processor is configured to perform at least one of: a determination of whether a plurality of anatomical features is detected; a determination of whether a first bounding box associated with detection of the anatomical feature occurs at an edge of a respective image frame; a determination of whether a second bounding box associated with the detection of the anatomical feature occurs in an ultrasound image frame obtained at an end of a sweep in the blind sweep protocol; a determination of whether a third bounding box of the detection of the anatomical feature may include a confidence that does not satisfy a first threshold; or a determination of whether a metric derived from image processing of the plurality of ultrasound image frames may include a confidence that does not satisfy a second threshold. In some aspects, the anatomical feature may include an anatomical feature of a fetus inside the patient body. In some aspects, to determine if the blind sweep protocol is complete, the processor is configured to determine whether a pre-defined anatomical plane for ultrasound evaluation of the fetus is detected. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0009] One general aspect includes a method which includes receiving, with a processor in communication with an ultrasound probe, a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, where the blind sweep protocol may include a plurality of sweeps of the ultrasound probe on the patient body. The method also includes receiving, with the processor, position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol. The method also includes detecting, with the processor, an anatomical feature of the patient body in the plurality of image frames. The method also includes performing, with the processor, registration between: the position data, and at least one of the detected anatomical feature or the plurality of ultrasound image frames. The method also includes generating, with the processor, an anatomical mapping based on the registration. The method also includes determining, with the processor, if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames. The method also includes, if the blind sweep protocol is incomplete, outputting, with the processor, user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, where the additional sweep is different than the plurality of sweeps of the blind sweep protocol. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0010] Implementations may include one or more of the following features. In some aspects, the user guidance may include a visual representation of at least one of a location, a direction, a length, or a duration for the additional sweep, where the visual representation is overlaid on the anatomical mapping. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium. [0011] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. A more extensive presentation of features, details, utilities, and advantages of the ultrasound sweep guidance system, as defined in the claims, is provided in the following written description of various aspects of the disclosure and illustrated in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Illustrative aspects of the present disclosure will be described with reference to the accompanying drawings, of which:
[0013] Figure 1 is a schematic, diagrammatic representation of an ultrasound imaging system, according to aspects of the present disclosure.
[0014] Figure 2 is a schematic diagram of a processor circuit, according to aspects of the present disclosure.
[0015] Figure 3 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
[0016] Figure 4 is a schematic, diagrammatic representation, in block diagram form, of an example ultrasound sweep evaluation system, according to aspects of the present disclosure.
[0017] Figure 5 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method, according to aspects of the present disclosure.
[0018] Figure 6 is a sweep progress screen display of an example ultrasound sweep evaluation system, according to aspects of the present disclosure.
[0019] Figure 7 is a schematic, diagrammatic, perspective view of an imaging probe (e.g., an ultrasound imaging probe) in contact with a body surface of a patient, according to aspects of the present disclosure.
[0020] Figure 8 is a schematic, diagrammatic representation, in hybrid flow diagram / block diagram form, of an example ultrasound sweep guidance method, according to aspects of the present disclosure.
[0021] Figure 9 is a schematic, diagrammatic illustration, in block diagram form, of an anatomy detection subsystem, according to aspects of the present disclosure.
[0022] Figure 10 is a schematic, diagrammatic representation of an ultrasound video 1010 made up of multiple image frames, according to aspects of the present disclosure.
[0023] Figure 11 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
[0024] Figure 12 is a schematic, diagrammatic representation of a patient, according to aspects of the present disclosure.
[0025] Figure 13 is a screen display of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure. [0026] Figure 14 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method, according to aspects of the present disclosure.
DETAILED DESCRIPTION
[0027] In accordance with at least one aspect of the present disclosure, an ultrasound sweep evaluation system is provided that measures the quality of ROI capture following a blind imaging sweep, and provides clear instructions to the user about any sweeps that may need to be repeated, either in the same location or in slightly different locations. The ultrasound sweep evaluation system presents a novel approach to imaging quality control by, in some aspects, using an inertial measurement unit (IMU). Other aspects, instead or in addition, leverage alternative motion tracking methods. An IMU measures probe motion, and these data are leveraged to determine post-sweep acceptance/rejection and repetition recommendations. The quality control methodology presented herein is intended for use with standalone, mobile, and cart-based ultrasound imaging systems, but can also be applied to other ultrasound systems and medical imaging and/or device guidance generally.
[0028] To simplify the imaging workflow for minimally trained users, the system instructs the user to perform imaging sweeps along a pre-determined grid on the mother’s abdomen (blind sweeps). This protocol is in contrast to the free hand sweeps that would be performed by a trained sonographer, during which the sonographer moves the probe as needed to localize relevant anatomical structures of interest. To optimize image quality from this blind sweep protocol by novice users, the motion sensor that is already included in some ultrasound probes may be leveraged to provide post-sweep quality control.
[0029] A blind sweep protocol is less likely to appropriately capture anatomical regions of interest (ROIs; e.g., head standard plane) when compared to guided sweeps by a trained sonographer. The performance of downstream medical analysis or AI-based analysis, which rely on these data for many applications (such as anatomy mapping e.g., to estimate fetal presentation, placenta location estimation and calculation of the maximum vertical pocket of amniotic fluid) may decrease as a result.
[0030] To mitigate this drawback, this invention describes a method to utilize information automatically gleaned from the blind sweep protocol to guide the user to acquire additional sweeps to capture information not present in the original blind sweeps. This is done by leveraging two technologies: inertial measurement unit (IMU) transducer motion tracking, and deep learning-based anatomical mapping. Using these two technologies, the system can identify frames containing relevant anatomical ROIs via anatomy mapping, and determine their locations within the blind sweep grid using IMU tracking data. [0031] An ROI can be classified as incompletely imaged (e.g., the detected anatomical feature within the ROI is incompletely imaged) if the bounding box associated with a particular anatomy detection is at the edge of the frame, is present in the last frame of the sweep, has a low classification confidence, does not contain specific features such as a standard plane or a measurable femur length, or by other methods. The system may then direct the user to perform additional blind sweeps in the identified area. Incomplete imagery can also be characterized by the complete absence of a particular anatomy - in such cases, other detected anatomies can be used to prescribe a starting point for a new sweep in the attempt to find the anatomy in question (e.g., if the femur is not captured in the original blind sweep, detections of the head/spine/abdomen can suggest a narrowed-down search location for the femur). The user may be guided to the start point of the new targeted blind sweep grid by IMU-based feedback, graphical/auditory/haptic demonstration in the user interface, or a combination thereof.
[0032] The ultrasound sweep evaluation system described herein mitigates these potential issues in the original blind sweeps, by automatically directing the user to perform follow up additional localized blind sweeps around anatomical ROIs, improving image data quality and thus increasing the value proposition of the ultrasound imaging platform, especially for novice users.
[0033] The ultrasound sweep evaluation system includes methods and devices to identify gaps in the image information gleaned from the blind sweep protocol. Gaps could refer to unscanned regions, partially covered anatomies, or completely missing anatomies. The system also includes methods and devices to estimate what user actions are needed to overcome these gaps (e.g., decide the boundaries/extent of an additional condensed sweep to capture missing information). Knowing what the gaps are, and where the missing information is expected to be present, can inform the generation of a “custom” re-scan protocol. An example re-scan protocol can be a condensed blind sweep in a specific area of the maternal abdomen (e.g., lower left quadrant, if the head is detected there in the original sweep but the number of detections is judged to be insufficient) to augment the information gleaned from the original set of blind sweeps. To facilitate both the original blind scan protocol and the re-scan protocol, the ultrasound sweep evaluation system also includes a user interface (UI) to communicate instructions, and possibly success metrics, to the user via graphics or text on the screen, via auditory or haptic feedback, or by other feedback mechanism(s). [0034] Anatomy detection includes a method to automatically detect anatomical ROIs within ultrasound images, such as head, spine, femur, etc. (e.g., object detection approaches based on the traditional image processing/computer vision methods or a deep learning detection/classification model). This feature may already be present in some ultrasound imaging systems.
[0035] Probe position tracking includes integrating acceleration and angular velocity measurements from an inertial measurement unit (IMU, whether internal or externally mounted), and/or other motion tracking or position tracking methods. By tracking the probe position, the system can determine the location of each ultrasound image frame, and therefore of the anatomical ROIs detected within the image frames. The position of missing information (missing anatomical ROIs and/or gaps between sweeps) can then be inferred, so that the user can be guided to perform follow-up sweeps. The probe motion tracking method also allows the user to be guided to the follow-up sweep location.
[0036] User guidance may be provided through a console, mobile app, etc. The feedback to the user may take the form of audio, visual, and/or haptic feedback to guide the transducer to the start point of the follow-up sweep. Another option (not mutually exclusive) is to provide a graphical illustration of the new scan grid in the user interface.
[0037] Some ultrasound probes already include an IMU that detects 3-dimensional acceleration and angular velocity. Velocity and position are the first and second integrals of acceleration, respectively. Orientation is the first integral of angular velocity. Therefore, the IMU provides the ability to identify transducer motion and rotation. Optionally, an IMU that includes a magnetometer may be used, to provide even more accurate tracking by providing an invariant orientation reference in the Earth’s magnetic field, or other magnetic fields that may be present. The transducer position information determined by the IMU can be used to map each image frame to a location on the blind sweep grid. Movement and position information may optionally be provided by other mechanisms (e.g., electromagnetic tracking, video-based tracking, etc.).
[0038] In some aspects, the IMU must be calibrated prior to use. To perform calibration, the transducer containing the IMU may for example be placed in a fixed, known orientation, with data being logged for a set time period to obtain the biases along each axis. The calibration time may for example be determined based on the number of samples needed to reliably determine sensor bias, and may thus depend on the sampling rate of the IMU. When the IMU data are processed (either in real-time or post-sweep) the biases determined during calibration will thus be known. It is noted that sensor biases of the IMU may be accounted for when processing its measurements. This may be done by various methods known in the art, including by requiring a calibration period prior to use to determine the biases, or by automatically accounting for biases in a Kalman-filter-based or neural-network-based pose/motion prediction algorithm(s). Optionally, the data streams may also be denoised using various hardware and/or software methods known in the art (e.g., low-pass, high-pass or band-pass filters or filtering methods). The sensor streams are then fused, as described below, and used to compute an X, Y, Z position for the probe in a global coordinate system. [0039] Anatomy Mapping: Given that the imaging frame rate, IMU sampling rate, and transducer position/pose at each IMU sampling are known, each image frame and its related anatomy labels can be mapped to its corresponding location within the blind sweep grid. For any given anatomical feature, completeness can be determined in multiple ways. For example, if the bounding box for a feature is at the edge of a frame, or still present in the last frame of a sweep, it can be deduced that it was incompletely imaged. Given that the location of the image frame and the incomplete anatomy within the frame are both known, it can be predicted in which direction the sweep must be shifted to complete the anatomy.
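For illustration only: a minimal sketch of the static bias-calibration idea described in paragraph [0038], estimating each axis's bias as its mean over a window logged while the probe is held in a fixed, known orientation; the mean estimator and the function names are assumptions.

```python
# Illustrative IMU bias calibration and correction.
import numpy as np

def estimate_bias(static_samples):
    """static_samples: (n_samples, n_axes) array logged while the probe is
    held still; returns the per-axis bias to subtract from later readings."""
    return np.mean(static_samples, axis=0)

def remove_bias(samples, bias):
    return samples - bias
```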
[0040] Conditional Sweep Guidance: If incomplete anatomy is identified, the user is then prompted to perform follow up sweeps by the user interface. The new blind sweep specifications may be relayed to the user in multiple ways as well. For example, the IMU may be employed again here to guide the user with real-time feedback to the desired start point. This feedback can be provided in auditory, visual, or haptic form, or some combination of these. Alternatively, or in addition to this feedback, a map of the new scan grid can be shown graphically in the user interface. Feedback may be provided at the conclusion of each sweep, or at the conclusion of the blind sweep protocol.
[0041] The present disclosure aids substantially in the capture of high-quality ultrasound images by minimally trained users, by detecting the position of the ultrasound probe when each image frame is captured, providing post-sweep feedback to the user, and requiring that the user repeat any sweeps where anatomy of interest is not fully captured. Implemented on a processor in communication with an ultrasound probe, the ultrasound sweep evaluation system disclosed herein provides practical improvements in the quality of the captured images and the time required to capture them. This improved imaging transforms a process that is heavily reliant on professional experience into one that is accurate and repeatable even for minimally trained personnel, without the normally routine need to train clinicians such as emergency department personnel to perform quality control on the ultrasound images they capture. This unconventional approach improves the functioning of the ultrasound imaging system, by providing reliable, repeatable imaging in hospital, office, vehicle, field, and home settings.
[0042] The ultrasound sweep evaluation system may be implemented as a process at least partially viewable on a display, and operated by a control process executing on a processor that accepts user inputs from a keyboard, mouse, or touchscreen interface, and that is in communication with one or more sensor probes. In that regard, the control process performs certain specific operations in response to different inputs or selections made at different times. Certain structures, functions, and operations of the processor, display, sensors, and user input systems are known in the art, while others are recited herein to enable novel features or aspects of the present disclosure with particularity.
[0043] These descriptions are provided for exemplary purposes only, and should not be considered to limit the scope of the ultrasound sweep guidance system. Certain features may be added, removed, or modified without departing from the spirit of the claimed subject matter.
[0044] For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the aspects illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one aspect may be combined with the features, components, and/or steps described with respect to other aspects of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately.
[0045] Figure 1 is a schematic, diagrammatic representation of an ultrasound imaging system 100, according to aspects of the present disclosure. The ultrasound imaging system 100 may for example be used to acquire ultrasound video sweeps, which can then be analyzed by a human clinician or an artificial intelligence to diagnose medical conditions. [0046] The ultrasound imaging system 100 is used for scanning an area or volume of a subject’s body. A subject may include a patient of an ultrasound imaging procedure, or any other person, or any suitable living or non-living organism or structure. The ultrasound imaging system 100 includes an ultrasound imaging probe 110 in communication with a host 130 over a communication interface or link 120. The probe 110 may include a transducer array 112, a beamformer 114, a processor circuit 116, and a communication interface 118. The host 130 may include a display 132, a processor circuit 134, a communication interface 136, and a memory 138 storing subject information.
[0047] In some aspects, the probe 110 is an external ultrasound imaging device including a housing 111 configured for handheld operation by a user. The transducer array 112 can be configured to obtain ultrasound data while the user grasps the housing 111 of the probe 110 such that the transducer array 112 is positioned adjacent to or in contact with a subject’s skin. The probe 110 is configured to obtain ultrasound data of anatomy within the subject’s body while the probe 110 is positioned outside of the subject’s body for general imaging, such as for abdomen imaging, liver imaging, etc. In some aspects, the probe 110 can be an external ultrasound probe, a transthoracic probe, and/or a curved array probe.
[0048] In other aspects, the probe 110 can be an internal ultrasound imaging device and may comprise a housing 111 configured to be positioned within a lumen of a subject’s body for general imaging, such as for abdomen imaging, liver imaging, etc. In some aspects, the probe 110 may be a curved array probe. Probe 110 may be of any suitable form for any suitable ultrasound imaging application including both external and internal ultrasound imaging.
[0049] In some aspects, the ultrasound probe 110 may include an inertial measurement unit (IMU) 115, which may in some cases include an accelerometer 140 (e.g., a 1-axis, 2-axis, or 3-axis accelerometer), a gyroscope 150 (e.g., a 1-axis, 2-axis, or 3-axis gyroscope), and/or a magnetometer 160 (e.g., a 1-axis, 2-axis, or 3-axis magnetometer). In some aspects, the probe 110 also includes a haptic motor 170 capable of, for example, vibrating the probe 110 to provide haptic feedback to a user holding the probe 110. In some aspects, the haptic motor 170 may be located outside of the probe 110, such as for example in a handheld host 130 (e.g., a smartphone or tablet computer).
[0050] In some aspects, the present disclosure can be implemented with medical images of subjects obtained using any suitable medical imaging device and/or modality. Examples of medical images and medical imaging devices include x-ray images (angiographic images, fluoroscopic images, images with or without contrast) obtained by an x-ray imaging device, computed tomography (CT) images obtained by a CT imaging device, positron emission tomography-computed tomography (PET-CT) images obtained by a PET-CT imaging device, magnetic resonance images (MRI) obtained by an MRI device, single-photon emission computed tomography (SPECT) images obtained by a SPECT imaging device, optical coherence tomography (OCT) images obtained by an OCT imaging device, and intravascular photoacoustic (IVPA) images obtained by an IVPA imaging device. The medical imaging device can obtain the medical images while positioned outside the subject body, spaced from the subject body, adjacent to the subject body, in contact with the subject body, and/or inside the subject body.
[0051] For an ultrasound imaging device, the transducer array 112 emits ultrasound signals towards an anatomical object 105 of a subject and receives echo signals reflected from the object 105 back to the transducer array 112. The ultrasound transducer array 112 can include any suitable number of acoustic elements, including one or more acoustic elements and/or a plurality of acoustic elements. In some instances, the transducer array 112 includes a single acoustic element. In some instances, the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration. For example, the transducer array 112 can include between 1 acoustic element and 10000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 500 acoustic elements, 812 acoustic elements, 1000 acoustic elements, 3000 acoustic elements, 8000 acoustic elements, and/or other values both larger and smaller. In some instances, the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.x-dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array. The array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be uniformly or independently controlled and activated. The transducer array 112 can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of a subject’s anatomy. In some aspects, the transducer array 112 may include a piezoelectric micromachined ultrasound transducer (PMUT), capacitive micromachined ultrasonic transducer (CMUT), single crystal, lead zirconate titanate (PZT), PZT composite, other suitable transducer types, and/or combinations thereof.
[0052] The object 105 may include any anatomy or anatomical feature, such as a kidney, liver, and/or any other anatomy of a subject. The present disclosure can be implemented in the context of any number of anatomical locations and tissue types, including without limitation, organs including the liver, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood vessels, blood, abdominal organs, and/or other systems of the body. In some aspects, the object 105 may include malignancies such as tumors, cysts, lesions, hemorrhages, or blood pools within any part of human anatomy. The anatomy may be a blood vessel, such as an artery or a vein of a subject’s vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body. In addition to natural structures, the present disclosure can be implemented in the context of man-made structures such as, but without limitation, heart valves, stents, shunts, filters, implants and other devices.
[0053] The beamformer 114 is coupled to the transducer array 112. The beamformer 114 controls the transducer array 112, for example, for transmission of the ultrasound signals and reception of the ultrasound echo signals. In some aspects, the beamformer 114 may apply a time-delay to signals sent to individual acoustic transducers within an array in the transducer 112 such that an acoustic signal is steered in any suitable direction propagating away from the probe 110. The beamformer 114 may further provide image signals to the processor circuit 116 based on the response of the received ultrasound echo signals. The beamformer 114 may include multiple stages of beamforming. The beamforming can reduce the number of signal lines for coupling to the processor circuit 116. In some aspects, the transducer array 112 in combination with the beamformer 114 may be referred to as an ultrasound imaging component.
[0054] The processor 116 is coupled to the beamformer 114. The processor 116 is configured for communication with the ultrasound probe, and is configured to output the user guidance and other outputs described herein. The processor 116 may also be described as a processor circuit, which can include other components in communication with the processor 116, such as a memory, beamformer 114, communication interface 118, and/or other suitable components. The processor 116 may include a central processing unit (CPU), a graphical processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a controller, a field programmable gate array (FPGA) device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 116 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The processor 116 is configured to process the beamformed image signals. For example, the processor 116 may perform filtering and/or quadrature demodulation to condition the image signals. The processor 116 and/or 134 can be configured to control the array 112 to obtain ultrasound data associated with the object 105. [0055] The communication interface 118 is coupled to the processor 116. The communication interface 118 may include one or more transmitters, one or more receivers, one or more transceivers, and/or circuitry for transmitting and/or receiving communication signals. The communication interface 118 can include hardware components and/or software components implementing a particular communication protocol suitable for transporting signals over the communication link 120 to the host 130. The communication interface 118 can be referred to as a communication device or a communication interface module.
[0056] The communication link 120 may be any suitable communication link. For example, the communication link 120 may be a wired link, such as a universal serial bus (USB) link or an Ethernet link. Alternatively, the communication link 120 may be a wireless link, such as an ultra-wideband (UWB) link, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 WiFi link, or a Bluetooth link.
[0057] At the host 130, the communication interface 136 may receive the image signals. The communication interface 136 may be substantially similar to the communication interface 118. The host 130 may be any suitable computing and display device, such as a workstation, a personal computer (PC), a laptop, a tablet, or a mobile phone.
[0058] The processor 134 is coupled to the communication interface 136. The processor 134 may also be described as a processor circuit, which can include other components in communication with the processor 134, such as the memory 138, the communication interface 136, an optional speaker 139, and/or other suitable components. The processor 134 may be implemented as a combination of software components and hardware components. The processor 134 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a controller, an FPGA device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 134 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The processor 134 can be configured to generate image data from the image signals received from the probe 110. The processor 134 can apply advanced signal processing and/or image processing techniques to the image signals. In some aspects, the processor 134 can form a three-dimensional (3D) volume image from the image data. In some aspects, the processor 134 can perform real-time processing on the image data to provide a streaming video of ultrasound images of the object 105. In some aspects, the host 130 includes a beamformer. For example, the processor 134 can be part of and/or otherwise in communication with such a beamformer. The beamformer in the host 130 can be a system beamformer or a main beamformer (providing one or more subsequent stages of beamforming), while the beamformer 114 is a probe beamformer or micro-beamformer (providing one or more initial stages of beamforming).
[0059] The memory 138 is coupled to the processor 134. The memory 138 may be any suitable storage device, such as a cache memory (e.g., a cache memory of the processor 134), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, solid state drives, other forms of volatile and non-volatile memory, or a combination of different types of memory.
[0060] The memory 138 can be configured to store subject information, measurements, data, or files relating to a subject’s medical history, history of procedures performed, anatomical or biological features, characteristics, or medical conditions associated with a subject, computer readable instructions, such as code, software, or other application, as well as any other suitable information or data. The memory 138 may be located within the host 130. Subject information may include measurements, data, files, other forms of medical history, such as but not limited to ultrasound images, ultrasound videos, and/or any imaging information relating to the subject’s anatomy. The subject information may include parameters related to an imaging procedure such as an anatomical scan window, a probe orientation, and/or the subject position during an imaging procedure. The memory 138 can also be configured to store information related to the training and implementation of machine learning algorithms (e.g., trained neural networks or machine learning models) and/or information related to implementing image recognition algorithms for detecting/segmenting anatomy, image quantification algorithms, and/or image acquisition guidance algorithms, including those described herein.
[0061] The display 132 is coupled to the processor circuit 134. The display 132 may be a monitor or any suitable display. The display 132 is configured to display the ultrasound images, image videos, and/or any imaging information of the object 105.
[0062] The ultrasound imaging system 100 may be used to assist a sonographer in performing an ultrasound scan. The scan may be performed at a point-of-care setting. In some instances, the host 130 is a console or movable cart. In some instances, the host 130 may be a mobile device, such as a tablet, a mobile phone, or portable computer. During an imaging procedure, the ultrasound system can acquire an ultrasound image of a particular region of interest within a subject’s anatomy. The ultrasound imaging system 100 may then analyze the ultrasound image to identify various parameters associated with the acquisition of the image such as the scan window, the probe orientation, the subject position, and/or other parameters. The ultrasound imaging system 100 may then store the image and these associated parameters in the memory 138. At a subsequent imaging procedure, the ultrasound imaging system 100 may retrieve the previously acquired ultrasound image and associated parameters for display to a user which may be used to guide the user of the ultrasound imaging system 100 to use the same or similar parameters in the subsequent imaging procedure, as will be described in more detail hereafter.
[0063] In some aspects, the processor 134 may utilize deep learning-based prediction networks to identify parameters of an ultrasound image, including an anatomical scan window, probe orientation, subject position, and/or other parameters. In some aspects, the processor 134 may receive metrics or perform various calculations relating to the region of interest imaged or the subject’s physiological state during an imaging procedure. These metrics and/or calculations may also be displayed to the sonographer or other user via the display 132.
[0064] In some aspects, the host 130 may also include a speaker 180. The speaker 180 may for example be used to provide warning tones, beeps, or other auditory feedback to the user.
[0065] Before continuing, it should be noted that the examples described above are provided for purposes of illustration, and are not intended to be limiting. Other devices and/or device configurations may be utilized to carry out the operations described herein.
[0066] Figure 2 is a schematic diagram of a processor circuit 250, according to aspects of the present disclosure. The processor circuit 250 may be implemented in the ultrasound imaging system 100, or other devices or workstations (e.g., third-party workstations, network routers, etc.), or on a cloud processor or other remote processing unit, as necessary to implement the method. As shown, the processor circuit 250 may include a processor 260, a memory 264, and a communication module 268. These elements may be in direct or indirect communication with each other, for example via one or more buses.
[0067] The processor 260 may include a central processing unit (CPU), a digital signal processor (DSP), an ASIC, a controller, or any combination of general-purpose computing devices, reduced instruction set computing (RISC) devices, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other related logic devices, including mechanical and quantum computers. The processor 260 may also comprise another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 260 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0068] The memory 264 may include a cache memory (e.g., a cache memory of the processor 260), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and nonvolatile memory, or a combination of different types of memory. In an aspect, the memory 264 includes a non-transitory computer-readable medium. The memory 264 may store instructions 266. The instructions 266 may include instructions that, when executed by the processor 260, cause the processor 260 to perform the operations described herein. Instructions 266 may also be referred to as code. The terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, subroutines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
[0069] The communication module 268 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor circuit 250 and other processors or devices. In that regard, the communication module 268 can be an input/output (I/O) device. In some instances, the communication module 268 facilitates direct or indirect communication between various elements of the processor circuit 250 and/or the ultrasound imaging system 100. The communication module 268 may communicate within the processor circuit 250 through numerous methods or protocols. Serial communication protocols may include but are not limited to Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), Recommended Standard 232 (RS-232), RS-485, Controller Area Network (CAN), Ethernet, Aeronautical Radio, Incorporated 429 (ARINC 429), MODBUS, Military Standard 1553 (MIL-STD-1553), or any other suitable method or protocol. Parallel protocols include but are not limited to Industry Standard Architecture (ISA), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), Peripheral Component Interconnect (PCI), Institute of Electrical and Electronics Engineers 488 (IEEE-488), IEEE-1284, and other suitable protocols. Where appropriate, serial and parallel communications may be bridged by a Universal Asynchronous Receiver Transmitter (UART), Universal Synchronous Receiver Transmitter (USART), or other appropriate subsystem.
[0070] External communication (including but not limited to software updates, firmware updates, model sharing between the processor and central server, or readings from the ultrasound imaging system 100) may be accomplished using any suitable wireless or wired communication technology, such as a cable interface such as a universal serial bus (USB), micro USB, Lightning, or FireWire interface, Bluetooth, Wi-Fi, ZigBee, Li-Fi, or cellular data connections such as 2G/GSM (global system for mobiles), 3G/UMTS (universal mobile telecommunications system), 4G, long term evolution (LTE), WiMax, or 5G. For example, a Bluetooth Low Energy (BLE) radio can be used to establish connectivity with a cloud service, for transmission of data, and for receipt of software patches. The controller may be configured to communicate with a remote server, or a local device such as a laptop, tablet, or handheld device, or may include a display capable of showing status variables and other information. Information may also be transferred on physical media such as a USB flash drive or memory stick.
[0071] Figure 3 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible on the abdomen 310 of the patient 300 is a desired sweep pattern 320 intended to capture images of desired features of the patient’s anatomy. The sweep pattern 320 consists of multiple vertical sweep lines 330 and multiple horizontal sweep lines 340. Each sweep line 330, 340 represents a desired path for one imaging sweep of the abdomen 310. In the example shown in Figure 3, the sweep pattern consists of three vertical sweep lines 330, all in an upward direction with respect to the patient, and three horizontal sweep lines 340, all in a left-right direction with respect to the patient. However, it is understood that a sweep pattern 320 may include more or fewer sweep lines, including vertical sweep lines 330, horizontal sweep lines 340, or combinations thereof, in any combination of upward, downward, left, or right directions. Furthermore, a sweep pattern 320 may cover other portions of the patient’s body, including but not limited to the head, neck, spine, limbs, etc.
[0072] Figure 4 is a schematic, diagrammatic representation, in block diagram form, of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure. In the example shown in Figure 4, the ultrasound sweep evaluation system 400 acquires ultrasound images 410 from one or more image sweeps, wherein each sweep may for example contain 30 images per second over a 10-second recording period, for a total of 300 image frames. The ultrasound sweep evaluation system 400 may include an IMU 420 to record the motion of the ultrasound imaging probe. Either or both of the ultrasound images 410 and the motion data from the IMU 420 may be used by a signal processing unit 430 to analyze the motion of the probe and determine whether it is compliant with the desired blind sweep protocol.
[0073] In some cases, the signal processing unit may include a sensor fusion algorithm such as a Kalman filter or extended Kalman filter. A Kalman filter uses linear quadratic estimation (LQE) to turn a series of noisy measurements observed over time into a series of accurate estimates for the true values of the variables. For a variable x at time k, the Kalman filter includes a state transition model F_k, a control-input model B_k, and a control vector u_k, which collectively describe the evolving system model at time k, such that:

x_k = F_k x_{k-1} + B_k u_k + w_k (EQN. 1)

where w_k is the (Gaussian) system noise around variable x at time k. The system may include hidden variables, and may reduce a large number of input variables into a smaller number of output variables such that, for example, 6-DOF IMU readings (x, y, and z linear accelerations, and rotation rates around the x, y, and z axes) and 3-DOF magnetometer readings (x, y, and z magnetic field strengths) may yield a 6-element output vector (x, y, and z positions and angles). In other exemplary cases, the Kalman filter may produce more output variables than there are inputs. In an example, the same 6-DOF IMU readings may yield a 15-element output of x, y, and z accelerations, x, y, and z velocities, x, y, and z positions, x, y, and z rotation rates, and x, y, and z angles.
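The following Python sketch illustrates the predict and update steps of such a linear Kalman filter using the notation of EQN. 1; it is a generic, textbook implementation offered for illustration only, not the specific filter of the signal processing unit 430.

```python
import numpy as np

def kalman_predict(x, P, F, B, u, Q):
    """One prediction step of a linear Kalman filter (EQN. 1).

    x : state estimate at time k-1        P : state covariance at time k-1
    F : state transition model F_k        B : control-input model B_k
    u : control vector u_k                Q : covariance of the system noise w_k
    """
    x_pred = F @ x + B @ u          # x_k = F_k x_{k-1} + B_k u_k (+ w_k)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Measurement update with observation z, observation model H, and noise covariance R."""
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```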
[0074] An extended Kalman filter (EKF) is a nonlinear version of a standard (linear) Kalman filter, which linearizes about an estimate of the current mean and covariance, and is frequently used in well-defined transition models such as navigation based on multiple unrelated input variables. In the EKF, the state transition and observation models need not be linear functions of the state but may instead be differentiable functions. In an example, the EKF may use multivariate Taylor series expansions to linearize a model about a working point at any given time k. If the system model is not well known or is inaccurate, then Monte Carlo methods may be employed within the EKF to generate variance around the input values, such that a mean prediction for each output variable can be extracted.
[0075] In some aspects, the signal processing unit 430 may employ other types of sensor fusion algorithms, including but not limited to a complementary filter, Madgwick filter, or a learning network. [0076] In an example, the signal processing unit 430 may implement or include any suitable type of learning network. For example, in some aspects, the signal processing unit 430 could include a neural network, such as a recurrent neural network (RNN) or temporal convolutional network (TCN) that includes temporal information. In addition, the neural network may additionally or alternatively be an encoder-decoder type network, or may utilize a backbone architecture based on other types of neural networks, such as an object detection network (if images are also used), classification network, etc. The TCN may for example include a set of N convolutional layers, where N may be any positive integer. Fully connected layers can be omitted when the TCN is a backbone. The TCN may also include max pooling layers and/or activation layers. Each convolutional layer may include a set of filters configured to extract features from an input (e.g., from an axis of the IMU). The value N and the size of the filters may vary depending on the aspects. In some instances, the convolutional layers may utilize any non-linear activation function, such as for example a leaky rectified linear unit (ReLU) activation function. In some instances, normalization/regularization methods may be used, such as batch normalization. The max pooling layers gradually adjust the dimension of the input variables (e.g., 6-DOF or 9-DOF IMU outputs) to a dimension of the desired result (e.g., a 15-element probe motion vector). Fully connected layers may be referred to as perception or perceptive layers. In some aspects, fully connected layers may be found in the neural network.
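As a purely illustrative example, a small temporal convolutional backbone of the kind described above could be sketched in PyTorch as follows; the channel counts, kernel sizes, and the 9-input / 15-output dimensions are arbitrary choices made for the example, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class ImuTcn(nn.Module):
    """Toy temporal convolutional network mapping a window of 9-DOF IMU
    samples to a 15-element probe motion vector (accelerations, velocities,
    positions, rotation rates, angles); layer sizes are illustrative only."""
    def __init__(self, in_channels: int = 9, out_dim: int = 15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.LeakyReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time dimension
        )
        self.head = nn.Linear(64, out_dim)  # fully connected "perception" layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time_steps)
        return self.head(self.features(x).squeeze(-1))
```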
[0077] These descriptions are included for exemplary purposes; a person of ordinary skill in the art will appreciate that other types of learning models, with features similar to or dissimilar to those described above, may be used instead or in addition, without departing from the spirit of the present disclosure.
[0078] The systems and methods disclosed herein are broadly applicable to different types of features, and can for example reduce the effects of noise, bias, drift, vibration, shock, or other error sources in one or more sensors or non-sensor inputs. In some aspects, inputs to the signal processing unit 430 may also include classified ultrasound image features, orientations derived from ultrasound image features, camera images of the probe, camera images from the probe, or positioning system inputs external to the ultrasound probe, including but not limited to global positioning system (GPS) inputs.
[0079] A user interface 440 provides instructions to the user regarding the blind-sweep protocol, as well as feedback on whether (and to what degree) the motions of the probe are compliant with the protocol. Feedback may include real-time feedback 450, including live deviation alerts provided during the sweep, and post-sweep feedback 460, indicating whether that sweep was compliant or non-compliant, and in some cases may also include statistics or other information regarding which aspects of the sweep were non-compliant. In some cases, a non-compliant sweep may be rejected, and the user may be asked to repeat that sweep.
[0080] Once all necessary compliant sweeps have been accepted, the signal processing unit 430 may produce a clinical report 470. Depending on the implementation, the clinical report may include 2D or 3D images, anatomical measurements or statistics, and other clinically relevant information. In some cases, the clinical report 470 may also include performance statistics on the user who performed the sweeps. In some cases, the content of the clinical report 470 may vary depending on geographic region, on the type of anatomy being imaged, or on other factors.
[0081] The clinical report 470 may, in some cases, be available to the user of the ultrasound sweep evaluation system 400, who may for example be a physician. In other cases, the user may be a paramedic, midwife, or other person not specifically trained in the interpretation of ultrasound images. In such cases, the clinical report 470 may be stored locally, may be transmitted over a network for review by a remote physician, or may be passed to a downstream AI tool for further analysis.
[0082] It is noted that block diagrams are provided herein for exemplary purposes; a person of ordinary skill in the art will recognize myriad variations that nonetheless fall within the scope of the present disclosure. For example, block diagrams may show a particular arrangement of components, modules, services, steps, processes, or layers, resulting in a particular data flow. It is understood that some aspects of the systems disclosed herein may include additional components, that some components shown may be absent from some aspects, and that the arrangement of components may be different than shown, resulting in different data flows while still performing the methods described herein.
[0083] Figure 5 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method 500, according to aspects of the present disclosure. It is understood that the steps of method 500 may be performed in a different order than shown in Figure 5, additional steps can be provided before, during, and after the steps, and/or some of the steps described can be replaced or eliminated in other embodiments. One or more of the steps of the method 500 can be carried out by one or more devices and/or systems described herein, such as components of the ultrasound imaging system 100, ultrasound sweep evaluation system 400, and/or processor circuit 250. [0084] In step 510, the method 500 includes controlling (e.g., with a processor) the ultrasound probe to obtain ultrasound image frames of the patient’s anatomy during the blind sweep protocol. Execution then proceeds to step 520.
[0085] In step 520, the method 500 includes performing anatomy detection in the ultrasound image frames, as described below. Execution then proceeds to step 550.
[0086] In step 530, the method 500 includes controlling (e.g., with a processor) the IMU to obtain IMU data at the same time (though not necessarily at the same rate) as the ultrasound image frames are being obtained by the ultrasound probe. Execution then proceeds to step 540.
[0087] In step 540, the method 500 includes using the IMU data to determine the locations of the ultrasound probe, in a global coordinate system, at the times of capture for each of the ultrasound images. Execution then proceeds to step 550.
[0088] In step 550, the method 500 includes registering the ultrasound image frames and the detected anatomy into the global coordinate system.
[0089] In step 560, the method 500 includes generating an anatomical mapping in the global coordinate system.
[0090] In step 570, the method 500 includes determining whether the imaging during the blind sweep protocol is complete (e.g., whether all of the anatomical ROIs have been fully imaged). If yes, execution proceeds to step 592. If No, execution proceeds to step 580.
[0091] The determination of complete vs. incomplete imaging can be made for example based on (a) whether a given anatomy or ROI is imaged at all, (b) whether a bounding box generated by the anatomy detector has a high or low confidence (e.g., a confidence above or below a specified threshold), (c) whether a bounding box generated by the anatomy detector occurs at the edge of an image frame, (d) whether a bounding box generated by the anatomy detector occurs in the last image frame of a sweep, (e) whether a standard plane classification can be performed (e.g., whether the standard plane can be found, with a confidence above a specified threshold) in at least some image frames of the sweep, (f) whether additional metrics calculated from the anatomical mapping (e.g., the gestational age of a fetus) can be computed (e.g., with a confidence above a specified threshold), or (g) other related criteria as would occur to a person of ordinary skill in the art.
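A minimal sketch of criteria (b) through (d) is shown below, assuming each detection is available as a pixel-space bounding box with a confidence score; the function name and thresholds are illustrative assumptions, not part of the disclosure.

```python
def roi_incomplete(box, frame_w, frame_h, is_last_frame,
                   conf_threshold=0.5, edge_margin_px=2):
    """Flag a detected feature as incompletely imaged if its bounding box
    has low confidence (criterion b), touches the frame edge (criterion c),
    or is still present in the last frame of the sweep (criterion d).

    box: (x_min, y_min, x_max, y_max, confidence) in pixel coordinates.
    """
    x_min, y_min, x_max, y_max, conf = box
    low_confidence = conf < conf_threshold
    at_edge = (x_min <= edge_margin_px or y_min <= edge_margin_px or
               x_max >= frame_w - edge_margin_px or
               y_max >= frame_h - edge_margin_px)
    return low_confidence or at_edge or is_last_frame
```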
[0092] It is noted that the criteria for detecting a pre-defined standard anatomical plane may for example be stored in a memory accessible to the processor. The pre-defined anatomical plane can be based on standards established by authorities in the field (physician organizations, sonographer organizations, etc.), published in scholarly journals/textbooks, etc. Examples include a pre-defined anatomical plane for fetal head circumference, a pre-defined anatomical plane for fetal abdominal circumference, etc.
[0093] In step 580, the method 500 includes determining the region(s) where the imaging is incomplete (e.g., locations within the global coordinate system where the incompletely imaged anatomy is located or should be located). Execution then proceeds to step 590.
[0094] In step 590, the method 500 includes providing output (whether visual, auditory, haptic, or otherwise) that includes location guidance for additional sweep(s) required to complete the imaging for the blind sweep protocol (e.g., to complete the imaging of the incompletely imaged ROIs). Such re-sweeps may include (a) repeating the entire blind sweep protocol, (b) repeating one or more sweeps of the protocol in the same locations as the previous sweeps, (c) repeating one or more sweeps of the protocol in slightly different locations from the previous sweeps, or (d) other related re-sweeps as would occur to a person of ordinary skill in the art (including for example partial sweeps, diagonal sweeps, highly localized sweeps, etc.).
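For illustration, one simple way to translate an incompletely imaged feature into a re-sweep location is to shift the sweep line toward the side of the frame on which the feature was cut off; the sketch below assumes grid coordinates in millimetres and an arbitrary 10 mm step, both of which are assumptions made for the example.

```python
import numpy as np

def resweep_offset(box_center_xy, frame_center_xy, step_mm=10.0):
    """Suggest a lateral shift for a follow-up sweep line.

    The direction from the frame center toward the center of the
    incompletely imaged bounding box indicates where the anatomy extends
    beyond the current sweep; the returned vector shifts the sweep line
    one grid step in that direction.
    """
    direction = np.asarray(box_center_xy, float) - np.asarray(frame_center_xy, float)
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return np.zeros(2)
    return step_mm * direction / norm
```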
[0095] Execution then returns to step 510.
[0096] In step 592, the method 500 includes providing output (whether visual, auditory, haptic, or otherwise) including an indication that imaging during the blind sweep protocol is complete (e.g., that no further sweeps need to be performed).
[0097] In step 594, the method 500 includes performing additional image processing to generate anatomical metrics (e.g., gestational age of a fetus). One advantage of the ultrasound sweep evaluation method 500 is that the additional sweeps can significantly improve the anatomical metrics. In an example, the additional image processing and/or the generated anatomical metrics are more accurate because they use more complete information from a more complete anatomical mapping (e.g., by obtaining additional anatomical information from additional sweeps to supplement the anatomical information from the blind sweep protocol that may be missing, incomplete, degraded, etc.).
[0098] The processor may then for example output the anatomical metrics to a display. The method 500 is now complete.
[0099] It is noted that flow diagrams are provided herein for exemplary purposes; a person of ordinary skill in the art will recognize myriad variations that nonetheless fall within the scope of the present disclosure. For example, the logic of flow diagrams may be shown as sequential. However, similar logic could be parallel, massively parallel, object oriented, real-time, event-driven, cellular automaton, or otherwise, while accomplishing the same or similar functions. In order to perform the methods described herein, a processor may divide each of the steps described herein into a plurality of machine instructions, and may execute these instructions at the rate of several hundred, several thousand, several million, or several billion per second, in a single processor or across a plurality of processors. Such rapid execution may be necessary in order to execute the method in real time or near-real time as described herein. For example, decoding IMU measurements into real-time position data may require a cycle time of 100 Hz, while in some aspects, real-time anatomy detection may occur at the frame rate of the ultrasound system (e.g., 20 Hz, 30 Hz, etc.).
[00100] Figure 6 is a sweep progress screen display 600 of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure. In the example shown in Figure 6, the sweep progress screen display 600 includes a stylized patient diagram 610 over which a sweep progress indicator 620 is overlaid. The sweep progress indicator 620 includes an illuminated current sweep line 630, indicating the position and direction of the current sweep, and a position indicator 640. The sweep progress screen display 600 also includes a control box 650 that provides instructions to the user, including a countdown timer 660, a cancel button 670 (e.g., to abort the current sweep), and an exit button 680 (e.g., to abort the entire blind sweep protocol).
[00101] Figure 7 is a schematic, diagrammatic, perspective view of an imaging probe (e.g., an ultrasound imaging probe) 110 in contact with a body surface 740 of a patient, according to aspects of the present disclosure. The probe 110 (e.g., an IMU located inside the probe) includes its own local coordinate system, with a Y-axis 730 aligned with the long axis of the probe, and orthogonal X-axis 710 and Z-axis 720. The probe 110 may have a different rotational velocity (e.g., rotational velocity 780) around each of the X, Y, and Z axes. The rotational velocities can be integrated to yield a rotation angle around each axis, which represents the orientation of the probe 110 in space, according to the right-hand rule.
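As a simplified illustration of this integration, the per-axis angular rates can be accumulated over time as sketched below; this small-angle, per-axis approximation ignores the non-commutativity of 3D rotations, which a quaternion-based or Kalman-filter-based implementation would handle more rigorously.

```python
import numpy as np

def integrate_gyro(rates_xyz, dt):
    """Integrate angular rates (rad/s) about the probe's X, Y, and Z axes
    into rotation angles (rad), per the right-hand rule.

    rates_xyz: (N, 3) array of gyroscope readings sampled every dt seconds.
    Returns an (N, 3) array of cumulative angles relative to the start pose.
    """
    return np.cumsum(np.asarray(rates_xyz, dtype=float) * dt, axis=0)
```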
[00102] Also visible is a global coordinate system that includes a Yg axis 770 and orthogonal Xg axis 750 and Zg axis 760. The global coordinate system may for example align with the caudal-cranial axis, the left-right axis, and the dorsal-ventral axis of the patient, although this will not always be the case. Furthermore, although an effort may be made to align the probe with the global coordinate system, it will not generally be the case that the probe coordinate system and the global coordinate system are identical. Thus, while ultrasound images are inherently captured relative to the probe’s coordinate system, it may be desirable to rotate the position, velocity, pose angles, and rotation rates of the probe 110 into the global coordinate system to determine whether the probe motion is compliant with the blind-sweep protocol. [00103] Figure 8 is a schematic, diagrammatic representation, in hybrid flow diagram / block diagram form, of an example ultrasound sweep guidance method 800, according to aspects of the present disclosure.
[00104] The method 800 includes receiving, into a sensor fusion sub-process 802, data streams from the IMU 115, which may include an accelerometer 140, gyroscope 150, and magnetometer 160, as described above. The sensor fusion sub-process 802 may for example be a Kalman filter, machine learning network, or similar.
[00105] In step 810, the method 800 includes accounting for any bias in the gyroscope 150. Execution then proceeds to step 815.
[00106] In step 815, the method 800 optionally includes denoising the data stream from the gyroscope 150. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
[00107] In step 820, the method 800 includes accounting for any bias in the accelerometer 140. Execution then proceeds to step 825.
[00108] In step 825, the method 800 optionally includes denoising the data stream from the accelerometer. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
[00109] In step 830, the method 800 includes accounting for any bias in the magnetometer 160. Execution then proceeds to step 832 and, in parallel, to step 835.
[00110] In step 835, the method 800 optionally includes denoising the data stream from the magnetometer. This may be done for example with a highpass, lowpass, or bandpass filter, by averaging multiple data points, or by other means. Execution then proceeds to step 840.
[00111] In step 840, the method 800 includes fusing the data streams from the gyroscope 150, accelerometer 140, and magnetometer 160, as described below. Execution then proceeds to step 850.
[00112] In step 850, the method 800 includes determining the orientation, angular velocity, velocity, and/or position of the ultrasound probe, using the fused data streams. The method is now complete.
[00113] It is noted that, although steps 810 through 850 are shown herein as individual steps, and may be performed as such, it should be appreciated that some aspects of the sensor fusion sub-process 802 may be capable of performing some or all of these functions in a reduced number of steps. For example, a Kalman filter or neural network can, in some instances, fuse noisy/biased sensor data in a way that yields the desired output vector. [00114] Regarding the fusion of the sensor streams, if for example the gyroscope and accelerometer have different sampling frequencies, the sampling can be unified. This may be accomplished by any synchronization algorithm known in the art (e.g., resampling one data stream to match the other, or by only using samples taken at the same timepoint, etc.). Once sampling frequencies are unified, there are a number of approaches to perform sensor fusion, which generally (though not exclusively) share the assumption that the rotational information provided by the gyroscope is used to transform the accelerometer measurements from the IMU’s frame of reference to a global frame of reference. In the IMU frame of reference, the first two axes may for example be defined along the short and long axes of the probe, with the third axis being orthogonal to the first two axes. The origin of this frame of reference may for example be the physical location of the IMU within the probe. In the global reference frame, the vertical axis may be aligned with the gravitational vector, and the other two axes may be defined by the orientation of the probe at the acquisition start time (nominally aligned with the patient’s longitudinal and transverse axes). The origin in the global reference frame may be the position of the probe at the acquisition start time.
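For example, one sampling-unification approach is to linearly resample one stream onto the other's timestamps, as in the sketch below; the function and variable names are illustrative assumptions.

```python
import numpy as np

def resample_to(t_target, t_source, samples):
    """Linearly resample an (N, k) sensor stream onto target timestamps so
    that, e.g., gyroscope and accelerometer share one timebase before fusion."""
    samples = np.asarray(samples, dtype=float)
    return np.stack(
        [np.interp(t_target, t_source, samples[:, i]) for i in range(samples.shape[1])],
        axis=1,
    )

# e.g., gyro_on_accel_clock = resample_to(t_accel, t_gyro, gyro_samples)
```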
[00115] In the simplest sensor fusion approach, the angular velocity measured by the gyroscope can be integrated to obtain the current IMU orientation. Alternatively, angular velocity and acceleration data may be fused with an extended Kalman filter (EKF), Kalman filter, complementary filter, Madgwick filter, neural network, or other similar sensor fusion methodologies. The output in this case would be more accurate estimates of orientation than from simply integrating the angular velocity. Note that changes in position derived from the IMU measurements may be relative to the initial location of the ultrasound probe. Initial position may be determined by anatomical landmarks on the patient (e.g., the patient’s bladder, pubic symphysis, etc.) or other means.
[00116] Once orientation is obtained, it can then be used to transform IMU acceleration to the global reference frame. In the global reference frame, the gravitational acceleration is subtracted from the vertical axis acceleration. To obtain velocity and position (relative to acquisition start time), acceleration can be single and double integrated, respectively.
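A minimal sketch of this step is shown below, assuming the acceleration has already been rotated into the global reference frame with the vertical axis last; simple cumulative sums stand in for the single and double integrations.

```python
import numpy as np

G = 9.81  # m/s^2, gravitational acceleration along the global vertical axis

def integrate_motion(accel_global, dt):
    """Subtract gravity, then single- and double-integrate acceleration to
    obtain velocity and position relative to the acquisition start time.

    accel_global: (N, 3) accelerations in the global frame, sampled every dt seconds.
    """
    a = np.asarray(accel_global, dtype=float).copy()
    a[:, 2] -= G                                  # remove gravity from the vertical axis
    velocity = np.cumsum(a * dt, axis=0)          # first integration
    position = np.cumsum(velocity * dt, axis=0)   # second integration
    return velocity, position
```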
[00117] In still another aspect, the data streams from the gyroscope and accelerometer may be fused using a neural network. Various methods for doing this have been described in the art, generally using a recurrent neural network (RNN)-based or temporal convolutional network (TCN)-based approach, which allow for the incorporation of temporal information. Incorporating temporal information can be crucial, both due to the time-series structure of the IMU data streams, and the tendency for IMU measurements to experience drift over time, which must be accounted for to obtain accurate measurements. Following sensor fusion by a neural network-based method, the quality control method may be implemented as previously described.
[00118] Figure 9 is a schematic, diagrammatic illustration, in block diagram form, of an anatomy detection subsystem 900, according to aspects of the present disclosure. An ultrasound video stream 910 comprising multiple frames 920 (whether real-time or recorded) is fed into the object detector 930.
[00119] The object detector 930 may implement or include any suitable type of learning network. For example, in some aspects, the object detector 930 could include a neural network, such as a convolutional neural network (CNN). In addition, the convolutional neural network may additionally or alternatively be an encoder-decoder type network, or may utilize a backbone architecture based on other types of neural networks, such as an object detection network, classification network, etc. One example backbone network is the Darknet YOLO backbone (e.g., YOLOv3), which can be used for object detection. The CNN may for example include a set of N convolutional layers, where N may be any positive integer. Fully connected layers can be omitted when the CNN is a backbone. The CNN may also include max pooling layers and/or activation layers. Each convolutional layer may include a set of filters configured to extract features from an input (e.g., from a frame of the ultrasound video). The value N and the size of the filters may vary depending on the aspects. In some instances, the convolutional layers may utilize any non-linear activation function, such as for example a leaky rectified linear unit (ReLU) activation function and/or batch normalization. The max pooling layers gradually shrink the high-dimensional output to a dimension of the desired result (e.g., bounding boxes of a detected feature). Outputs of the detection network may include numerous bounding boxes, with most having very low confidence scores and thus being filtered out or ignored. Higher-confidence bounding boxes 965 indicate detection of a particular anatomical feature, such as the head or spine of a fetus. Fully connected layers may be referred to as perception or perceptive layers. In some aspects, perception/perceptive and/or fully connected layers may be found in object detector 930 (e.g., a multi-layer perceptron).
[00120] These descriptions are included for exemplary purposes; a person of ordinary skill in the art will appreciate that other types of learning models, with features similar to or dissimilar to those described above, may be used instead or in addition, without departing from the spirit of the present disclosure.
[00121] Outputs of the object detector 930 may include an annotated ultrasound video 940 made up of a plurality of annotated image frames 950, as well as per-frame metrics 960. In the example shown in Figure 9, the per-frame metrics include the number of bounding boxes identified in the frame, the respective areas of the bounding boxes, and the respective confidence scores of each box.
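For illustration, the per-frame metrics shown in Figure 9 could be computed from the filtered detections roughly as follows; the bounding box format and field names are assumptions made for the example.

```python
def per_frame_metrics(boxes):
    """Summarize one annotated frame: number of bounding boxes, their areas,
    and their confidence scores.

    boxes: list of (x_min, y_min, x_max, y_max, confidence) tuples remaining
    after low-confidence detections have been filtered out.
    """
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2, _ in boxes]
    confidences = [conf for *_, conf in boxes]
    return {"num_boxes": len(boxes), "areas": areas, "confidences": confidences}
```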
[00122] The systems and methods disclosed herein are broadly applicable to different types of anatomical features, and can also be used to identify suspected imaging artifacts. The object detector can be single-class or multi-class, depending on how the model is built. Either way, multiple feature classes can be identified, and enclosed in detection boxes, at the same time. One can train/run a single detector that detects multiple feature types (a "multi-class detector") and provides their locations as an output, along with the confidence score and feature type (class) of each detection. Alternatively, one could run several "single-class" detectors, each trained to detect a single feature type/class. These separate single-class detectors may have the same architecture (e.g., layers and connections), but would have been trained with different data (e.g., different images and/or annotations). The system could show the detection of different feature types in the figure by adding, e.g., boxes with a black outline color. The system could then calculate two separate metrics based on each type of detection to arrive at the video-level classification for the feature type. Alternatively, the system could calculate a metric based on both/several types of features to arrive at a single video-level classification.
[00123] In the example shown in Figure 9, the image frames 950 of the ultrasound video 940 include two low-confidence bounding boxes 970 (e.g., bounding boxes with a confidence below 50%, or other threshold) and one edge-located bounding box 980. Low-confidence bounding boxes 970 and edge-located bounding boxes 980 may be indications that the anatomy detected within the bounding box has not been completely or accurately imaged, and thus a new sweep over that location may be necessary.
[00124] Figure 10 is a schematic, diagrammatic representation of an ultrasound video 1010 made up of multiple image frames 1020, according to aspects of the present disclosure. Each image frame has a height 1030 (e.g., in millimeters or pixels), a width 1040 (e.g., in millimeters or pixels), and a time of capture 1050 (e.g., in seconds from the start of a sweep). Each image frame 1020 also has a frame number (e.g., Frame 1, Frame 2, etc.) and an associated position in space (e.g., Xi, Yi, Zi; X2, Y2, Z2, etc.). One or more of the frames 1020 may include one or more bounding boxes 965, as described above.
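A minimal data structure capturing this per-frame bookkeeping might look like the following sketch; the class and field names are illustrative only and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FrameRecord:
    """Metadata tying one ultrasound image frame to the blind sweep grid."""
    frame_number: int
    capture_time_s: float                        # seconds from the start of the sweep
    width_px: int
    height_px: int
    position_xyz: Tuple[float, float, float]     # probe position in the global frame
    boxes: List[Tuple[float, float, float, float, float]] = field(default_factory=list)
```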
[00125] Figure 11 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible on the abdomen 310 of the patient 300 are locations of ultrasound image frames 1020, including sweep 1 frames 1110, sweep 2 frames 1120, and sweep 3 frames 1130. The image frames 1020 further include frames 1140 that include a first detected anatomy or ROI, and image frames 1150 that include a second detected anatomy or ROI.
[00126] Figure 12 is a schematic, diagrammatic representation of a patient 300, according to aspects of the present disclosure. Visible within the abdomen 310 of the patient 300 are a fetal head 1210, fetal spine 1220, fetal umbilical cord 1230, and mother’s navel 1240. In an example, a blind sweep protocol may be intended to fully image each of these anatomical features, and portions of the blind sweep protocol may need to be repeated, either in the same locations or in slightly different locations, in order to complete the capture of all required features.
[00127] Figure 13 is a screen display 1300 of an example ultrasound sweep evaluation system 400, according to aspects of the present disclosure. Visible within the screen display are the abdomen 310 of the patient 300, fetal head 1210, fetal spine 1220, fetal umbilical cord 1230, and mother’s navel 1240. Also visible are dotted arrows C1, C2, and C3 indicating the paths of three horizontal blind sweeps that occurred during the initial blind sweep protocol. However, in the example shown in Figure 13, sweep C1 is not well centered on the head 1210, and sweep C3 is not well positioned with respect to the spine 1220 and umbilical cord 1230. The system has therefore determined that these two sweeps need to be performed again in slightly different positions. Accordingly, the screen display shows two re-sweep lines, C1-2 and C3-2, showing the user where to sweep the ultrasound probe in order to complete the imaging of these ROIs. Thus, the instructions to the user may include a visual indication of the position, direction, and length of the re-sweep on the body of the patient, and may also include information about the desired speed or duration of the sweep.
[00128] Figure 14 is a schematic, diagrammatic representation, in flow diagram form, of an example ultrasound sweep evaluation method 1400, according to aspects of the present disclosure.
[00129] In step 1430, the method 1400 includes controlling (e.g., with a processor) a position detection system to obtain position data simultaneously with the ultrasound image frames obtained during the blind sweep protocol. For example, the position detection system can be or include a magnetic coil in a fixed magnetic field, a 3D camera, a plurality of 2D cameras, one or more ultrasonic depth finders, or another system capable of determining the position of the probe in 3D space.
[00130] In step 1440, the method 1400 includes determining, using the position data, locations of the ultrasound probe while obtaining the ultrasound image frames. In other words, the position data is representative of a plurality of positions of the ultrasound probe during the blind sweep protocol.
[00131] The steps 1430 and 1440 may be performed as part of the method 500 of Figure 5, e.g., instead of or in addition to steps 530 and 540.
[00132] The position detection system may for example be or include an IMU. However, in other aspects, a video feed of one or multiple cameras may be used to track the probe motion. Markers may be included on the probe to track its motion, or the intended path could be determined based on the images themselves. This may be accomplished with a variety of visual odometry/simultaneous localization and mapping (SLAM) approaches - for example, by tracking the centroid of the probe across the imaging field of known size at a known frame rate, or by using optical flow-based approaches. In another example, a neural network could be used to extract the probe's position. The network may be a convolutional neural network which is trained on many images of ultrasound probes with different known orientations moving in known directions on a marked surface. The network can learn the association between the visible markers in the consecutive video frames and probe position, or can learn based on human annotation of the video frames, indicating whether the probe position, angle, and velocity are compliant or non-compliant.
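By way of example only, an optical-flow-based estimate of in-plane probe motion between two external-camera frames could be sketched with OpenCV as follows; converting the pixel displacement to physical units would require a camera calibration that is not shown, and the function name and parameter values are assumptions made for the example.

```python
import cv2

def mean_probe_displacement(prev_frame, next_frame):
    """Estimate mean in-plane motion (dx, dy) in pixels between two
    consecutive camera frames using dense Farneback optical flow."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())
```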
[00133] In still other aspects, the position detection system of step 1430 may be or include a processing controller (e.g., a neural network similar to the neural network 930 of Figure 9) that is trained on the acquired ultrasound images or external camera images and the corresponding IMU data for each successfully accomplished sweep. A variational encoder-decoder architecture, such as an RNN or TCN, could be used to automatically distinguish deviations from the successful sweeps. The network may be fed raw ultrasound or external camera image frames, optical flow data derived from the images, or the output of some other pre-processing method. The IMU data may also be provided as a raw input, pre-processed in the manner previously described, or prepared by some other method. The processing controller may be used to classify adherent vs. non-adherent frames/sweeps, or to output continuous estimates of motion parameters that are subsequently used to determine adherence in the threshold-based manner previously described.

[00134] In still other aspects, the position detection system of step 1430 may be or include other tracking sensors and systems (e.g., magnetic tracking sensors and systems) to track the probe position. For example, in the case of electromagnetic tracking, a fixed external magnetic field generator creates a local magnetic field. In conjunction, three orthogonal magnetic coils are installed in or attached to the ultrasound probe. Voltage is induced in the coils as they pass through the magnetic field, from which the position and orientation of the probe with respect to the magnetic field generator may be determined.
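For illustration of the threshold-based adherence determination referenced in paragraph [00133], the sketch below compares continuous motion estimates against tolerance bands. The numeric thresholds are placeholders chosen for the example and are not values specified by the disclosure.

```python
# Minimal sketch of a threshold-based adherence check: continuous motion
# estimates (speed, lateral deviation from the intended path) are compared
# against tolerance bands. Threshold values are placeholders.
MIN_SPEED_MM_S = 10.0
MAX_SPEED_MM_S = 40.0
MAX_DEVIATION_MM = 15.0

def sweep_is_adherent(speeds_mm_s, deviations_mm):
    """Classify a sweep as adherent if the speed stays within the band and
    the probe stays close to the intended path at every sampled instant."""
    speed_ok = all(MIN_SPEED_MM_S <= s <= MAX_SPEED_MM_S for s in speeds_mm_s)
    path_ok = all(d <= MAX_DEVIATION_MM for d in deviations_mm)
    return speed_ok and path_ok
```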
[00135] In still other aspects, the detected anatomy may be used to track the probe position, using a neural network similar to the neural network 930 of Figure 9 that has been trained, for example, using IMU data and ultrasound images of the patient anatomy, such that the trained neural network is able to determine a position in 3D space based on the ultrasound images (or the detected anatomy) alone.
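As a hedged illustration of the image-only position estimation described in paragraph [00135], the sketch below defines a small convolutional network that regresses a 3D probe position from a single ultrasound frame. The architecture, layer sizes, and training pairing with tracker-derived positions are assumptions for this example and do not describe the neural network 930 of Figure 9.

```python
# Minimal sketch of a network that regresses a 3D probe position from a
# single grayscale ultrasound frame. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

class FramePositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)  # x, y, z in the tracker's coordinate frame

    def forward(self, x):             # x: (B, 1, H, W) grayscale frames
        f = self.features(x).flatten(1)
        return self.head(f)

# Training would pair each frame with an IMU- or tracker-derived position:
#   loss = nn.MSELoss()(model(frames), positions)
```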
[00136] As will be readily appreciated by those having ordinary skill in the art after becoming familiar with the teachings herein, the ultrasound sweep evaluation system advantageously permits untrained and minimally trained users to perform an ultrasound blind sweep protocol to gather anatomical images of high quality. This may result in higher accuracy and higher clinician trust in the results, while potentially decreasing the amount of time required to perform the sweeps.
[00137] The systems, methods, and devices described herein may be applicable in point of care and handheld ultrasound use cases such as with the Philips Lumify system. The ultrasound sweep evaluation system can be used for any handheld imaging applications, including but not limited to obstetrics, lung imaging, and echocardiography. The ultrasound sweep evaluation system could be deployed on handheld mobile ultrasound devices, and on portable or cart-based ultrasound systems. The ultrasound sweep evaluation system can be used in a variety of settings including emergency departments, ambulances, accident sites, and homes. The applications could also be expanded to other settings.
[00138] The functionality and output of the system may include the auditory, visual, and/or haptic feedback it provides in relation to the transducer motion. This invention improves the functioning of the ultrasound imaging system in the obstetrics context, especially for use by minimally trained users.
[00139] A number of variations are possible on the examples and aspects described above. For example, the systems, methods, and devices described herein are not limited to obstetric ultrasound applications. Rather, the same technology can be applied to images of other organs or anatomical systems such as the lungs, heart, brain, digestive system, vascular system, etc. Furthermore, the technology disclosed herein is also applicable to other medical imaging modalities where 3D data is available, such as other ultrasound applications, camera-based videos, X-ray videos, and 3D volume images, such as computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, or optical coherence tomography (OCT) scans. The technology described herein can be used in a variety of settings.
[00140] Accordingly, the logical operations making up the aspects of the technology described herein are referred to variously as operations, steps, objects, layers, elements, components, algorithms, or modules. Furthermore, it should be understood that these may occur or be performed or arranged in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language.
[00141] All directional references (e.g., upper, lower, inner, outer, upward, downward, left, right, lateral, front, back, top, bottom, above, below, vertical, horizontal, clockwise, counterclockwise, proximal, and distal) are only used for identification purposes to aid the reader’s understanding of the claimed subject matter, and do not create limitations, particularly as to the position, orientation, or use of the ultrasound sweep guidance system. Connection references (e.g., attached, coupled, connected, joined, or “in communication with”) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and in fixed relation to each other. The term “or” shall be interpreted to mean “and/or” rather than “exclusive or.” The word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. Unless otherwise noted in the claims, stated values shall be interpreted as illustrative only and shall not be taken to be limiting.
[00142] The above specification, examples and data provide a complete description of the structure and use of exemplary aspects of the ultrasound sweep evaluation system as defined in the claims. Although various aspects of the claimed subject matter have been described above with a certain degree of particularity, or with reference to one or more individual aspects, those skilled in the art could make numerous alterations to the disclosed aspects without departing from the spirit or scope of the claimed subject matter.
[00143] Still other aspects are contemplated. It is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative only of particular aspects and not limiting. Changes in detail or structure may be made without departing from the basic elements of the subject matter as defined in the following claims.

Claims

What is claimed is:
1. A system, comprising:
a processor configured for communication with an ultrasound probe, wherein the processor is configured to:
receive a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, wherein the blind sweep protocol comprises a plurality of sweeps of the ultrasound probe on the patient body;
receive position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol;
detect an anatomical feature of the patient body in the plurality of image frames;
perform registration between: the position data; and at least one of the detected anatomical feature or the plurality of ultrasound image frames;
generate an anatomical mapping based on the registration;
determine if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames; and
if the blind sweep protocol is incomplete, output user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, wherein the additional sweep is different than the plurality of sweeps of the blind sweep protocol.
2. The system of claim 1, wherein the processor is configured to determine at least one of a location, a direction, a length, or a duration for the additional sweep, based on the anatomical mapping.
3. The system of claim 2, wherein the user guidance comprises a visual representation of at least one of the location, the direction, the length, or the duration for the additional sweep, and wherein the processor is configured to output the user guidance to a display in communication with the processor.
4. The system of claim 3, further comprising the display.
5. The system of claim 1, wherein the processor is configured to output the anatomical mapping to a display in communication with the processor.
6. The system of claim 4, wherein the user guidance is overlaid on the anatomical mapping.
7. The system of claim 1, wherein the user guidance comprises auditory feedback, wherein the processor is configured to output the user guidance to a speaker in communication with the processor.
8. The system of claim 7, further comprising the speaker.
9. The system of claim 1, wherein the user guidance comprises haptic feedback, wherein the processor is configured to output the haptic feedback to a haptic motor in communication with the processor.
10. The system of claim 9, further comprising the haptic motor, wherein the haptic motor is disposed within the ultrasound probe.
11. The system of claim 1, further comprising the ultrasound probe.
12. The system of claim 1, wherein the ultrasound probe comprises at least one of an accelerometer, a gyroscope, or a magnetometer disposed within the ultrasound probe.
13. The system of claim 12, wherein the processor is configured to receive the position data from at least one of the accelerometer, the gyroscope, or the magnetometer.
14. The system of claim 1, further comprising at least one of a camera or a magnetic coil configured to obtain the position data, wherein the processor is configured to receive the position data from at least one of the camera or the magnetic coil.
15. The system of claim 1, wherein the processor is configured to detect the anatomical feature using a trained machine learning model.
16. The system of claim 1, wherein, to determine if the blind sweep protocol is complete, the processor is configured to perform at least one of:
a determination of whether a plurality of anatomical features is detected;
a determination of whether a first bounding box associated with detection of the anatomical feature occurs at an edge of a respective image frame;
a determination of whether a second bounding box associated with the detection of the anatomical feature occurs in an ultrasound image frame obtained at an end of a sweep in the blind sweep protocol;
a determination of whether a third bounding box of the detection of the anatomical feature comprises a confidence that does not satisfy a first threshold; or
a determination of whether a metric derived from image processing of the plurality of ultrasound image frames comprises a confidence that does not satisfy a second threshold.
17. The system of claim 1, wherein the anatomical feature comprises an anatomical feature of a fetus inside the patient body.
18. The system of claim 17, wherein, to determine if the blind sweep protocol is complete, the processor is configured to determine whether a pre-defined anatomical plane for ultrasound evaluation of the fetus is detected.
19. A method, comprising:
receiving, with a processor in communication with an ultrasound probe, a plurality of ultrasound image frames obtained by the ultrasound probe during a blind sweep protocol on a patient body, wherein the blind sweep protocol comprises a plurality of sweeps of the ultrasound probe on the patient body;
receiving, with the processor, position data representative of a plurality of positions of the ultrasound probe during the blind sweep protocol;
detecting, with the processor, an anatomical feature of the patient body in the plurality of image frames;
performing, with the processor, registration between: the position data; and at least one of the detected anatomical feature or the plurality of ultrasound image frames;
generating, with the processor, an anatomical mapping based on the registration;
determining, with the processor, if the blind sweep protocol is incomplete based on at least one of the anatomical mapping, the registration, the detected anatomical feature, or the plurality of ultrasound image frames; and
if the blind sweep protocol is incomplete, outputting, with the processor, user guidance associated with an additional sweep on the patient body by the ultrasound probe to obtain an additional plurality of ultrasound image frames, wherein the additional sweep is different than the plurality of sweeps of the blind sweep protocol.
20. The method of claim 19, wherein the user guidance comprises a visual representation of at least one of a location, a direction, a length, or a duration for the additional sweep, wherein the visual representation is overlaid on the anatomical mapping.
PCT/EP2024/075707 2023-09-27 2024-09-14 Ultrasound imaging with follow up sweep guidance after blind sweep protocol Pending WO2025067903A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363540755P 2023-09-27 2023-09-27
US63/540,755 2023-09-27

Publications (1)

Publication Number Publication Date
WO2025067903A1 true WO2025067903A1 (en) 2025-04-03

Family

ID=92843299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/075707 Pending WO2025067903A1 (en) 2023-09-27 2024-09-14 Ultrasound imaging with follow up sweep guidance after blind sweep protocol

Country Status (1)

Country Link
WO (1) WO2025067903A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200069285A1 (en) * 2018-08-31 2020-03-05 General Electric Company System and method for ultrasound navigation
US20220354466A1 (en) * 2019-09-27 2022-11-10 Google Llc Automated Maternal and Prenatal Health Diagnostics from Ultrasound Blind Sweep Video Sequences
US20220401062A1 (en) * 2019-11-21 2022-12-22 Koninklijke Philips N.V. Point-of-care ultrasound (pocus) scan assistance and associated devices, systems, and methods


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24773368

Country of ref document: EP

Kind code of ref document: A1