
WO2025124940A1 - Systems and methods for imaging screening - Google Patents


Info

Publication number
WO2025124940A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, interest, regions, imaging, zone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/084209
Other languages
French (fr)
Inventor
Mohsen ZAHIRI
Sean Flannery
Goutam GHOSHAL
Hyeon Woo Lee
Balasundar Iyyavu Raju
Stephen Schmidt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Publication of WO2025124940A1
Legal status: Pending

Classifications

    • A61B 8/08: Diagnosis using ultrasonic, sonic or infrasonic waves; clinical applications
    • A61B 8/4254: Details of probe positioning or probe attachment to the patient; determining the position of the probe using sensors mounted on the probe
    • A61B 8/469: Special input means for selection of a region of interest
    • A61B 8/5223: Processing of medical diagnostic data for extracting a diagnostic or physiological parameter
    • A61B 5/0095: Detecting, measuring or recording by applying light and detecting acoustic waves, i.e. photoacoustic measurements
    • A61B 8/463: Displaying multiple images or images and diagnostic data on one display
    • A61B 8/467: Special input means for interfacing with the operator or the patient
    • A61B 8/468: Special input means allowing annotation or message recording
    • A61B 8/5292: Image processing using additional data, e.g. patient information, image labeling, acquisition parameters

Definitions

  • the present disclosure pertains to imaging systems and methods for verifying the accuracy of automatic classification of an imaging exam, more particularly for verifying AI classification of zones and objects of interest in an image.
  • Imaging may be used for a wide array of purposes, such as in medical diagnosis, monitoring, and research.
  • ultrasound exams are valuable for a wide variety of diagnostic purposes such as fetal development monitoring, cardiac valve health assessment, liver disease monitoring, and detecting internal bleeding.
  • Images may capture a particular field of view or zone of a subject, which may include one or more objects of interest such as organs.
  • it may be useful to automatically label the zone and objects of interest represented by the image.
  • automatic classification tools, such as AI models, may be prone to errors such as misidentification. Accordingly, there may be a need to verify the accuracy of the automatic classification.
  • the present disclosure addresses the challenges of automatic classification of images.
  • An image may be collected by an imaging unit such as a probe and then one or more regions of interest (ROIs) in the image may be classified (for example by an AI model).
  • the system described herein includes an inertial measurement unit (IMU) in the imaging unit which generates position information during the imaging. Based on the position information, the ROIs classified by the AI model may be verified.
  • the present disclosure relates to an imaging system which includes an imaging unit and a processor.
  • the imaging unit acquires an image from a subject and includes an inertial measurement unit (IMU) which generates position information based on a position of the imaging unit.
  • the processor receives the image from the imaging unit, identifies one or more regions of interest in the image, determines a true zone of the subject based on the position information, compares the identified one or more regions of interest to expected regions of interest based on the true zone, and removes or changes the identification of the one or more regions of interest if they do not match the expected regions of interest.
  • the one or more regions of interest may represent organs and the expected regions of interest may be a list of organs expected to be visible in an image of the true zone.
  • the processor may implement a machine learning model which identifies the regions of interest based on the image.
  • the imaging system may be an ultrasound imaging system and the probe may include a transducer array.
  • the processor may also generate display data based on the image and a label for the determined true zone based on the position information.
  • the imaging system may include a display which displays the display data.
  • the IMU may include an accelerometer configured to generate the position information.
  • the present disclosure relates to a non-transitory computer readable medium encoded with instructions that when executed, cause an imaging system to classify one or more regions of interest in an image based on a machine learning model (MLM), determine a true imaging zone based on position information associated with the image, compare the classified one or more regions of interest to expected regions of interest based on the true imaging zone, and correct the classifications if the classified one or more regions of interest based on the MLM do not match the expected regions of interest based on the true imaging zone.
  • the instructions when executed may cause the imaging system to correct the classifications by changing or removing the classified one or more regions of interest.
  • the instructions when executed may cause the imaging system to remove the classification of selected ones of the one or more regions of interest if the selected ones are not expected to appear in the true imaging zone.
  • the instructions when executed may cause the imaging system to generate display data based on the corrected classifications.
  • the present disclosure relates to a method which includes acquiring an ultrasound image from a subject with an ultrasound probe, acquiring position information associated with the ultrasound image based on an inertial measurement unit (IMU) in the ultrasound probe, classifying one or more regions of interest in the image with a machine learning model (MLM), and verifying the classification of the one or more regions of interest based on the position information.
  • the method may include comparing the classified one or more regions of interest to expected regions of interest based on the position information, and updating the classification of the one or more regions of interest if the classification does not match the expected one or more regions of interest.
  • the method may include removing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM.
  • the method may include changing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM.
  • the method may include changing a classification of one of the regions of interest from a first organ to a second organ based on a similarity between the appearance of the first organ and the second organ and the presence of the second organ but not the first organ in the verified classified imaging zone.
  • the method may include determining a true imaging zone based on the position information and verifying the classification based, in part, on the true imaging zone.
  • the method may include generating display data which includes the image with a label based on the verified classifications.
  • the method may include determining if the ultrasound probe is correctly positioned based on the position information.
  • the method may include determining that the ultrasound probe is upside down and flipping the ultrasound image if the ultrasound probe is upside down.
  • the method may include performing a focused assessment with sonography for trauma (FAST) exam with the ultrasound probe, where the ultrasound image is part of the FAST exam.
  • FIG. 1 is a block diagram of an imaging system according to some embodiments of the present disclosure.
  • FIG. 2 shows a block diagram of an ultrasound imaging system 200 according to some embodiments of the present disclosure.
  • FIG. 3 is a block diagram illustrating an example processor 300 according to some embodiments of the present disclosure.
  • FIG. 4 shows a block diagram of a process for training and deployment of a machine learning model according to some embodiments of the present disclosure.
  • FIG. 5 is a flow chart of an example workflow of an imaging system according to some embodiments of the present disclosure.
  • FIG. 6 is a set of diagrams which show examples of IMU position information at different patient positions according to some embodiments of the present disclosure.
  • FIGS. 7A-7C are a set of images of example classification and verification steps according to some embodiments of the present disclosure.
  • FIG. 8 is a flow chart of a method according to some embodiments of the present disclosure.
  • an imaging system may be positioned near a subject to capture images of a particular field of view or zone of the subject.
  • the imaging system may include a handheld probe which a user positions at different locations on the subject’s body in order to capture images of one or more different zones.
  • An automatic classification tool such as a trained AI or machine learning model (MLM) may automatically classify different regions or objects of interest (e.g., organs) within the image. Based on the regions of interest identified, the MLM may also determine a zone visible in the image.
  • the Focused Assessment with Sonography for Trauma (FAST) exam is a rapid ultrasound exam conducted in trauma situations to assess patients for free-fluid.
  • Different zones (e.g., regions of the body) are imaged during the exam.
  • Zones typically include the right upper quadrant (RUQ), the left upper quadrant (LUQ), and the pelvis (SP).
  • Zones may further include the lung and heart.
  • Each zone may include one or more regions of interest (ROIs), which may be organs or particular views of organs.
  • a typical FAST exam includes images of the kidney, liver, liver tip, diaphragm, spleen, kidney-liver interface, diaphragm-liver interface, and volume fanning acquired from the RUQ zone.
  • a subxiphoid view of the heart is acquired. Not every region of interest may be visible in every zone.
  • the RUQ will generally not include a view of the spleen.
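  • As one way to make the zone/ROI relationship above concrete, the sketch below shows a minimal lookup of the organs expected in each FAST zone. It is illustrative only: the dictionary name, the zone keys, and the organ lists (e.g., bladder for the pelvic view) are assumptions for demonstration and do not represent the disclosed ROI model.

```python
# Illustrative only: organs expected in each FAST zone, loosely following the
# examples above (lists and zone keys are assumptions, not a clinical reference).
EXPECTED_ROIS = {
    "RUQ": {"kidney", "liver", "liver tip", "diaphragm",
            "kidney-liver interface", "diaphragm-liver interface"},
    "LUQ": {"kidney", "spleen", "diaphragm"},
    "SP": {"bladder"},          # suprapubic / pelvis view (assumed)
    "SUBXIPHOID": {"heart"},    # cardiac view (assumed)
}

def organs_expected(zone: str) -> set[str]:
    """Return the set of ROIs expected to be visible in the given zone."""
    return EXPECTED_ROIS.get(zone, set())
```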
  • the FAST exam is an important ultrasound test in trauma situations to quickly assess patients for free-fluid (e.g., blood due to internal hemorrhaging).
  • the test aids in determining the severity of the injury, allowing for timely and accurate decision-making in patient care.
  • Adequate image quality is particularly important in the FAST exam as it is primarily used in critical care situations, where even small amounts of free-fluid can indicate a significant injury.
  • Having a good understanding of the shape and location of organs in each zone can help inexperienced physicians/sonographers obtain high-quality images. Understanding the anatomy of each zone (e.g., Right Upper Quadrant, Left Upper Quadrant) can guide data acquisition and improve the quality of images obtained during the exam. Therefore, knowledge about the zones and organs is important to achieving optimal data acquisition in the FAST exam, which is necessary for accurate diagnosis and treatment.
  • An imaging system of the present disclosure includes a probe which includes an inertial measurement unit (IMU).
  • the IMU collects positioning data related to a position of the probe while the imaging system is collecting an image.
  • a trained MLM automatically classifies or identifies one or more regions of interest in the image.
  • the imaging system verifies the classification performed by the MLM and, if needed, corrects the classification. For example, based on the position information from the IMU, the system may determine a ‘true’ imaging zone. Based on that true imaging zone, the system may retrieve a list of expected regions of interest in the imaging zone (e.g., based on anatomy). The expected list of ROIs is compared to the MLM-classified ROIs to identify errors.
  • the correction may include changing a label of which zone the image represents, removing an incorrect label from a misidentified ROI, changing a label of a misidentified ROI, or combinations thereof.
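  • A minimal sketch of the verify-and-correct step just described might look like the following; the function name, the region format, and the confusion table of look-alike organs are hypothetical, and the example mirrors the liver/spleen substitution discussed later in this disclosure.

```python
# Hypothetical sketch of the verification/correction step described above.
# `classified` maps MLM labels to image regions; `expected_rois` is the
# anatomy-based list for the true (IMU-derived) zone; `confusions` lists
# organ pairs known to look similar (illustrative values only).
def verify_and_correct(classified: dict, expected_rois: set, confusions: dict) -> dict:
    corrected = {}
    for label, region in classified.items():
        if label in expected_rois:
            corrected[label] = region               # classification verified
        elif confusions.get(label) in expected_rois:
            corrected[confusions[label]] = region   # relabel a look-alike organ
        # otherwise the label is removed and the region goes unclassified
    return corrected

# Example: the MLM labeled a spleen as "liver", but the IMU says this is the LUQ.
corrected = verify_and_correct(
    classified={"liver": (120, 80, 200, 160), "kidney": (60, 140, 150, 220)},
    expected_rois={"spleen", "kidney", "diaphragm"},
    confusions={"liver": "spleen"},
)
# corrected == {"spleen": (120, 80, 200, 160), "kidney": (60, 140, 150, 220)}
```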
  • the imaging system may generate display information (e.g., a labelled figure) based on the corrected classifications.
  • the imaging system may be an ultrasound system, and may include a handheld probe used to conduct a FAST exam.
  • the handheld probe includes an IMU which collects position information such as the rotational orientation of the handheld probe.
  • An ultrasound image of a target zone is collected by positioning the handheld probe in a particular location relative to the subject.
  • An MLM automatically identifies any ROIs within the zone based on the image. From those identified ROIs the MLM may also identify a preliminary zone (e.g., based on which organs were classified as being part of the image).
  • a processor of the ultrasound system uses the orientation information from the IMU to determine the true zone.
  • FIG. 1 is a block diagram of an imaging system according to some embodiments of the present disclosure.
  • the imaging system 100 includes an imaging unit 110 and a computing unit 150.
  • the imaging unit 110 may be positioned about a subject 102 in order to collect one or more images of the subject 102. For example, two positions of the imaging unit 110, Position A and Position B, are shown.
  • the imaging unit 110 is communicatively coupled to the computing unit 150 and the imaging unit 110 provides imaging data and position information to the computing unit 150.
  • the computing unit 150 generates images 166 based on the imaging data from the imaging unit 110, classifies the images 166 based on a trained MLM 162, and verifies (and if needed corrects) the classification based on position information 164 received from an IMU of the imaging unit 110.
  • the imaging modality 114 may include a transducer array, which directs ultrasound into the subject 102 and measures reflected sound energy (e.g., echoes) received from the subject 102, which can be used by the imaging system 100 to generate an image.
  • the imaging unit 110 may represent a handheld unit such as a handheld probe, which may be positioned on or near the subject 102 (e.g., in contact with the subject’s skin) for imaging regions of interest, such as organs 104 and 106, within a zone of the subject 102.
  • the computing unit 150 includes a computer readable memory 160 which may be loaded with one or more instructions 170.
  • the computing unit 150 includes one or more processors 152 which operate the instructions 170.
  • the computing unit 150 also includes a communication module 154 to enable communications with the imaging unit 110 and/or with external systems (e.g., a data server, the internet, local networks, etc.), an input system 156 which allows a user to interact with the imaging system 100, and a display 158 which presents information to the user.
  • the input system 156 may include a mouse, keyboard, touch screen, or combinations thereof.
  • the memory 160 stores a set of instructions 170, as well as various other types of information which may be useful.
  • the memory 160 may store a trained machine learning model 162, position information 164 acquired from the IMU 112, and images 166 generated from the imaging data generated by the imaging modality 114.
  • the memory 160 may also store information such as position matching information 165, which determines an imaging position of the imaging unit 110 based on the position information 164, and an ROI model 168 (or expected ROI information), which may indicate which types of ROIs are present in which imaging zones.
  • the computing unit 150 may perform one or more processing steps to generate the images 166.
  • the imaging system is an ultrasound system
  • the imaging data may represent raw signals from an ultrasound transducer, and processing may be used to generate images based on those raw signals.
  • the position information 164 is collected from the IMU 112.
  • the position information 164 and images 166 may be associated with each other.
  • both the position information 164 and images 166 may be time-stamped, and the images 166 may be associated with the position information 164 collected at or around the time the image was collected. For example each image may be matched to a set of position information.
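  • The time-stamp association described above could be implemented, for example, with a nearest-neighbour lookup such as the sketch below; the array names and the assumption of a shared, sorted time base are illustrative.

```python
import numpy as np

# Sketch: pair each image frame with the IMU sample closest in time.
# Assumes imu_times is sorted and both time stamps share the same clock.
def match_position_to_images(image_times: np.ndarray, imu_times: np.ndarray,
                             imu_samples: np.ndarray) -> np.ndarray:
    idx = np.searchsorted(imu_times, image_times)          # first IMU time >= image time
    idx = np.clip(idx, 1, len(imu_times) - 1)
    left_closer = (image_times - imu_times[idx - 1]) < (imu_times[idx] - image_times)
    idx = np.where(left_closer, idx - 1, idx)
    return imu_samples[idx]                                 # one IMU sample per image
```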
  • the instructions 170 include instruction 172, which describes classifying a region of interest in an image based on the MLM 162.
  • the MLM 162 may be trained to recognize regions of interest within the images. For example, the MLM 162 may be trained based on previously recorded images which have been labelled. In some embodiments, the MLM 162 may make an initial zone determination based on the classified ROIs in the image.
  • the MLM 162 may be trained to recognize regions of interest, such as organs, within the view. For example the MLM 162 may identify organs such as the liver, diaphragm, kidney, and spleen when they are present within the image.
  • the instructions 170 include instruction 174, which describes determining the imaged zone based on position information from the IMU 112.
  • the memory 160 may store position matching information 165 which determines which zone is imaged based on the position information 164. For example, in the example positions shown, a first zone may be imaged from Position A, while a second zone may be imaged from Position B. To change position between Position A and Position B, there is a 90° rotation in the XY plane and a change in location in the -Y and -X directions.
  • the position information may reflect one or more of these changes and the position matching information 165 may use one or more of those changes to determine which imaging position is reflected in the position information 164.
  • the position matching information 165 may set up one or more criteria based on which axis gravity appears along (and which direction it is pointing relative to the accelerometer) to determine the orientation of the imaging unit 110, which in turn may determine the position of the imaging unit 110 relative to the subject and thus the zone being imaged.
  • For example, if gravity primarily appears along an axis oriented vertically with respect to the imaging unit 110, it may indicate that the imaging unit 110 is in Position A, but if gravity primarily appears along an axis oriented horizontally with respect to the imaging unit 110, it may indicate that the imaging unit 110 is in Position B.
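  • The sketch below illustrates one possible form of this gravity-based position matching; the axis-to-position rules and thresholds are assumptions for demonstration, and a practical system would derive its criteria from recorded data such as that shown in FIG. 6.

```python
import numpy as np

# Illustrative sketch: infer the probe position from which accelerometer axis
# carries most of gravity, and its sign (rules below are assumed, not disclosed).
def dominant_gravity_axis(accel_xyz: np.ndarray) -> tuple[int, float]:
    """accel_xyz: (n_samples, 3) accelerometer stream; returns (axis, signed mean)."""
    mean_g = accel_xyz.mean(axis=0)
    axis = int(np.argmax(np.abs(mean_g)))
    return axis, float(mean_g[axis])

def match_position(accel_xyz: np.ndarray) -> str:
    axis, value = dominant_gravity_axis(accel_xyz)
    if axis == 2 and value > 0:      # gravity along the probe's long axis
        return "Position A"
    if axis == 0:                    # gravity along a horizontal probe axis
        return "Position B"
    return "unknown"
```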
  • the instructions 170 include instruction 176 which describes verifying (and if not verified, correcting) the classification performed by the MLM 162 based on the zone determined based on the position information 164.
  • the zone determined based on the IMU 112 position information 164 may generally be considered to be the ‘true’ zone since it is based on measured information about the position of the imaging unit 110 during imaging.
  • the memory 160 may include an ROI model 168, which may include a library of what types of ROIs should be visible in different zones. Based on the true zone identified based on the position information, a list of expected ROIs may be retrieved from the ROI model 168 and compared to the ROIs classified by the MLM 162. If the classified ROIs do not match the expected ROIs, then the classified ROIs may have their classifications removed or corrected.
  • the two example ROIs 104 and 106 may both be expected to be seen in images collected from Position A; however, only ROI 104 may be expected to be seen from Position B.
  • the ROI model 168 may be generated based on the known anatomy of the different zones which are imaged in a subject. If the MLM-determined zone is not verified and is corrected to the true zone, the ROI model 168 may be compared to the ROIs classified by the MLM to determine if any of the classifications should be changed. For example, if the MLM classifies an object as ROI 106, but the computing unit 150 has determined that the imaged zone is from Position B, then the classification of the object as ROI 106 may be determined to be incorrect and may be corrected.
  • the classification of the incorrect ROI may be removed and that ROI may go unclassified in the verified image.
  • the classification of the ROI may be changed from the incorrect classification to a different classification. For example, if it is known that the MLM 162 is prone to misclassifying ROI 104 as ROI 106 from Position B (e.g., because of a similar appearance), then the classification of ROI 106 may be changed to ROI 104 in the verified image.
  • the imaging system 100 may present information to the user via the display 158.
  • the imaging system 100 may apply labels to the image 166 which indicate the zone being imaged as well as any classified ROIs present in the image (as determined by the MLM 162 and corrected by the position information 164).
  • the imaging system 100 may present labels on previously recorded images 166.
  • the imaging system 100 may stream the images 166 ‘live’ onto the display and may classify, verify, and correct those images in real-time or close to real-time.
  • the imaging system 100 may present additional feedback to the user. For example, if the IMU 112 data indicates that the imaging unit 110 is being held backwards (e.g., upside down) but is otherwise positioned correctly, the imaging system 100 may prompt the user to correct the positioning of the imaging unit 110. For example, if the IMU 112 includes a 3-axis accelerometer and the imaging unit 110 is held upside down, the position information may match one of the known patterns of position information in the position matching information 165, except that the signals are inverted (e.g., because the probe 110, and thus the accelerometer, is upside down relative to the expected gravity vector). The imaging system 100 may display a message, sound an alert, and/or provide other feedback to prompt the user to correct the positioning before the imaging continues.
  • in addition to, or instead of, prompting the user, the imaging system 100 may automatically compensate for the incorrect positioning, for example by flipping the image when the imaging unit 110 is held upside down.
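  • A hedged sketch of the upside-down check and automatic compensation described above is shown below; the template vector, the tolerance, and the use of a simple image flip are assumptions for illustration.

```python
import numpy as np
import cv2

# Hypothetical sketch: if the mean accelerometer reading matches a known
# position template but with inverted sign, the probe is likely upside down.
def check_orientation(mean_accel: np.ndarray, template: np.ndarray,
                      tol: float = 0.2) -> str:
    if np.allclose(mean_accel, template, atol=tol):
        return "ok"
    if np.allclose(mean_accel, -template, atol=tol):
        return "upside_down"
    return "unknown"

def prepare_image(image: np.ndarray, orientation: str) -> np.ndarray:
    if orientation == "upside_down":
        return cv2.flip(image, -1)   # flip about both axes before classification
    return image
```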
  • FIG. 2 shows a block diagram of an ultrasound imaging system 200 according to some embodiments of the present disclosure.
  • the ultrasound imaging system 200 may, in some embodiments, implement the imaging system 100 of Figure 1.
  • the ultrasound imaging system 200 includes a probe 212 (e.g., imaging unit 110 of Figure 1) and processing circuitry 250 (e.g., computing unit 150 of Figure 1).
  • An ultrasound imaging system 200 may include a transducer array 214, which may be included in an ultrasound probe 212, for example an external probe or an internal probe such as an intravascular ultrasound (IVUS) catheter probe.
  • the transducer array 214 may be in the form of a flexible array configured to be conformally applied to a surface of a subject to be imaged (e.g., a patient).
  • the transducer array 214 is configured to transmit ultrasound signals (e.g., beams, waves) and receive echoes responsive to the ultrasound signals.
  • a variety of transducer arrays may be used, e.g., linear arrays, curved arrays, or phased arrays.
  • the transducer array 214 can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging.
  • the axial direction is the direction normal to the face of the array (in the case of a curved array the axial directions fan out)
  • the azimuthal direction is defined generally by the longitudinal dimension of the array
  • the elevation direction is transverse to the azimuthal direction.
  • the ultrasound probe 212 includes an IMU 213 (e.g., 112 of Figure 1), which records information about the position of the probe 212 and provides that information as a stream of data which can be stored as position information.
  • the IMU 213 may include a 3-axis accelerometer, and may provide position information which includes 3 channels, each of which represents the component of an acceleration vector relative to one of the three axes.
  • the transducer array 214 may be coupled to a microbeamformer 216, which may be located in the ultrasound probe 212, and which may control the transmission and reception of signals by the transducer elements in the array 214.
  • the microbeamformer 216 may control the transmission and reception of signals by active elements in the array 214 (e.g., an active subset of elements of the array that define the active aperture at any given time).
  • the microbeamformer 216 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 218, which switches between transmission and reception and protects the main beamformer 222 from high energy transmit signals.
  • the T/R switch 218 and other elements in the system can be included in the ultrasound probe 212 rather than in the ultrasound system base, which may house the image processing electronics.
  • An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface.
  • the transmission of ultrasonic signals from the transducer array 214 under control of the microbeamformer 216 is directed by the transmit controller 220, which may be coupled to the T/R switch 218 and a main beamformer 222.
  • the transmit controller 220 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 214, or at different angles for a wider field of view.
  • the transmit controller 220 may also be coupled to a user interface 224 and receive input from the user's operation of a user control.
  • the user interface 224 may include one or more input devices such as a control panel 252, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
  • the signal processor 226 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation.
  • the signal processor 226 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination.
  • the processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation.
  • the IQ signals may be coupled to a number of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data).
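  • As a rough illustration of the I/Q separation mentioned above (not the disclosed signal processor), the sketch below mixes beamformed RF down by an assumed centre frequency and low-pass filters it; the sampling rate, centre frequency, and filter length are placeholder values.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Sketch: quadrature demodulation of beamformed RF into complex IQ samples.
def rf_to_iq(rf: np.ndarray, fs: float = 40e6, f0: float = 5e6) -> np.ndarray:
    t = np.arange(rf.shape[-1]) / fs
    mixed = rf * np.exp(-2j * np.pi * f0 * t)       # complex down-mixing
    lp = firwin(65, f0 / (fs / 2))                  # low-pass FIR at the centre frequency
    return lfilter(lp, 1.0, mixed, axis=-1)         # baseband I + jQ
```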
  • the system may include a B-mode signal path 258 which couples the signals from the signal processor 226 to a B-mode processor 228 for producing B-mode image data.
  • the B-mode processor can employ amplitude detection for the imaging of structures in the body.
  • the signals produced by the B-mode processor 228 may be coupled to a scan converter 230 and/or a multiplanar reformatter 232.
  • the scan converter 230 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 230 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format.
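  • A simplified sketch of the 2D sector scan conversion described above is given below; it resamples a (beam angle, depth) grid onto Cartesian pixels by nearest-neighbour lookup, with grid sizes and geometry chosen only for illustration.

```python
import numpy as np

# Sketch: convert polar samples (n_angles x n_depths) to a Cartesian image.
def scan_convert(polar: np.ndarray, angles: np.ndarray, depths: np.ndarray,
                 nx: int = 512, nz: int = 512) -> np.ndarray:
    x = np.linspace(depths[-1] * np.sin(angles[0]), depths[-1] * np.sin(angles[-1]), nx)
    z = np.linspace(0.0, depths[-1], nz)
    xx, zz = np.meshgrid(x, z)
    r = np.hypot(xx, zz)                                  # depth of each pixel
    th = np.arctan2(xx, zz)                               # beam angle of each pixel
    ri = np.clip(np.searchsorted(depths, r), 0, len(depths) - 1)
    ti = np.clip(np.searchsorted(angles, th), 0, len(angles) - 1)
    out = polar[ti, ri]                                   # nearest-neighbour sample
    out[(r > depths[-1]) | (th < angles[0]) | (th > angles[-1])] = 0.0
    return out
```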
  • the multiplanar reformatter 232 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer).
  • the scan converter 230 and multiplanar reformatter 232 may be implemented as one or more processors in some embodiments.
  • a volume renderer 234 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.).
  • the volume renderer 234 may be implemented as one or more processors in some embodiments.
  • the volume renderer 234 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
  • the system may include a Doppler signal path 262 which couples the output from the signal processor 226 to a Doppler processor 260.
  • the Doppler processor 260 may be configured to estimate the Doppler shift and generate Doppler image data.
  • the Doppler image data may include color data which is then overlaid with B-mode (i.e. grayscale) image data for display.
  • the Doppler processor 260 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter.
  • the Doppler processor 260 may be further configured to estimate velocity and power in accordance with known techniques.
  • the Doppler processor may include a Doppler estimator such as an autocorrelator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function.
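  • The lag-one autocorrelation (Kasai-type) estimation just described can be sketched as follows; the PRF, centre frequency, and sound speed are placeholder values, and the ensemble layout is an assumption.

```python
import numpy as np

# Sketch: velocity from the phase of the lag-one autocorrelation, power from
# the magnitude of the lag-zero autocorrelation, over a slow-time ensemble.
def doppler_estimates(iq: np.ndarray, prf: float = 4000.0,
                      f0: float = 5e6, c: float = 1540.0):
    """iq: complex array (..., n_pulses) of slow-time IQ samples per pixel."""
    r1 = np.sum(iq[..., 1:] * np.conj(iq[..., :-1]), axis=-1)   # lag-one autocorrelation
    r0 = np.sum(np.abs(iq) ** 2, axis=-1)                       # lag-zero autocorrelation
    f_d = np.angle(r1) * prf / (2.0 * np.pi)                    # Doppler frequency (Hz)
    velocity = f_d * c / (2.0 * f0)                             # axial velocity (m/s)
    return velocity, r0                                          # (velocity, Doppler power)
```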
  • Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques.
  • Other estimators related to the temporal or spatial distributions of velocity such as estimators of acceleration or temporal and/or spatial velocity derivatives can be used instead of or in addition to velocity estimators.
  • the velocity and power estimates may undergo further threshold detection to further reduce noise, as well as segmentation and post-processing such as filling and smoothing.
  • the velocity and power estimates may then be mapped to a desired range of display colors in accordance with a color map.
  • the color data, also referred to as Doppler image data, may then be coupled to the scan converter 230, where the Doppler image data may be converted to the desired image format and overlaid on the B-mode image of the tissue structure to form a color Doppler or a power Doppler image.
  • the classification/verification processor 270 may include one or more machine learning or artificial intelligence algorithms and/or neural networks, collectively referred to as machine learning models (MLM) 272 (e.g., MLM 162 of Figure 1).
  • MLM 272 may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder neural network, or the like.
  • the MLM 272 may be implemented in hardware (e.g., neurons are represented by physical components) and/or software (e.g., neurons and pathways implemented in a software application) components.
  • the MLM 272 implemented according to the present disclosure may use a variety of topologies and algorithms for training the MLM 272 to produce the desired output.
  • a software-based neural network may be implemented using a processor (e.g., single or multi-core CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel-processing) configured to execute instructions, which may be stored in a computer readable medium, and which when executed cause the processor to perform a trained algorithm for classification of an organ, anatomical feature(s), and/or a view of an ultrasound image (e.g., an ultrasound image received from the scan converter 230).
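  • By way of illustration only, a small convolutional classifier of the kind such a processor might execute is sketched below; the layer sizes, the single-channel input, and the number of classes are assumptions and do not represent the disclosed MLM 272.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN mapping a grayscale ultrasound frame to class scores
# (e.g., organ or zone classes); all sizes are illustrative assumptions.
class ZoneOrganClassifier(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale ultrasound frames
        return self.head(self.features(x).flatten(1))
```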
  • the processor may perform a trained algorithm for identifying a zone and/or quality of an ultrasound image.
  • the MLM 272 may be trained using any of a variety of currently known or later developed learning techniques to obtain a neural network (e.g., a trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound images, measurements, and/or statistics.
  • the MLM 272 may be statically trained. That is, the MLM may be trained with a data set and deployed on the classification/verification processor 270.
  • the MLM 272 may be dynamically trained. In these embodiments, the MLM 272 may be trained with an initial data set and deployed on the classification/verification processor 270. However, the MLM 272 may continue to train and be modified based on ultrasound images acquired by the system 200 after deployment of the MLM 272 on the classification/verification processor 270.
  • the classification/verification processor 270 may not include a MLM 272 and may instead implement other image processing techniques for feature recognition and/or quality detection such as image segmentation, histogram analysis, edge detection or other shape or object recognition techniques.
  • the classification/verification processor 270 may implement the MLM 272 in combination with other image processing methods.
  • the MLM 272 and/or other elements may be selected by a user via the user interface 224.
  • Outputs from the classification/verification processor 270, the scan converter 230, the multiplanar reformatter 232, and/or the volume renderer 234 may be coupled to an image processor 236 for further enhancement, buffering and temporary storage before being displayed on an image display 238.
  • output from the scan converter 230 is shown as provided to the image processor 236 via the classification/verification processor 270; in some embodiments, the output of the scan converter 230 may be provided directly to the image processor 236.
  • a graphics processor 240 may generate graphic overlays for display with the images.
  • the classification/verification processor 270 may provide display data for the zone of the current image as well as the types of ROIs present in the image.
  • the graphics processor 240 may overlay these (verified or corrected) classifications over the image on the display 238. For example, the classified ROIs may be displayed as labels over the region of the image identified as the ROI.
  • the system 200 may include local memory 242.
  • Local memory 242 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive).
  • Local memory 242 may store data generated by the system 200 including ultrasound images, position information from the IMU 213, executable instructions, imaging parameters, training data sets, or any other information necessary for the operation of the system 200.
  • the local memory 242 may store executable instructions in a non-transitory computer readable medium that may be executed by the classification/verification processor 270.
  • the local memory 242 may store ultrasound images and/or videos responsive to instructions from the classification/verification processor 270.
  • local memory 242 may store other outputs of the classification/verification processor 270, such as the (verified and/or corrected) identification of ROIs in the image.
  • User interface 224 may include display 238 and control panel 252.
  • the display 238 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some embodiments, display 238 may include multiple displays.
  • the control panel 252 may be configured to receive user inputs (e.g., exam type, information calculated by and/or displayed from the classification/verification processor 270).
  • the control panel 252 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others).
  • control panel 252 may additionally or alternatively include soft controls (e.g., GUI control elements or simply, GUI controls) provided on a touch sensitive display.
  • display 238 may be a touch sensitive display that includes one or more soft controls of the control panel 252.
  • various components shown in FIG. 2 may be combined.
  • classification/verification processor 270, image processor 236 and graphics processor 240 may be implemented as a single processor.
  • various components shown in FIG. 2 may be implemented as separate components.
  • signal processor 226 may be implemented as separate signal processors for each imaging mode (e.g., B-mode, Doppler).
  • one or more of the various processors shown in FIG. 2 may be implemented by general purpose processors and/or microprocessors configured to perform the specified tasks.
  • one or more of the various processors may be implemented as application specific circuits.
  • one or more of the various processors (e.g., image processor 236) may be implemented with one or more graphics processing units (GPUs).
  • the processor 300 may include one or more registers 312 communicatively coupled to the core 302.
  • the registers 312 may be implemented using dedicated logic gate circuits (e.g., flip-flops) and/or any memory technology. In some embodiments the registers 312 may be implemented using static memory. The registers may provide data, instructions, and addresses to the core 302.
  • the bus 316 may be coupled to one or more external memories.
  • the external memories may include Read Only Memory (ROM) 332.
  • ROM 332 may be a masked ROM, Electronically Programmable Read Only Memory (EPROM) or any other suitable technology.
  • the external memory may include Random Access Memory (RAM) 333.
  • RAM 333 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology.
  • the external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 335.
  • the external memory may include Flash memory 334.
  • the external memory may include a magnetic storage device such as disc 336.
  • the external memories may be included in a system, such as ultrasound imaging system 100 of Figure 1 and/or 200 shown in Fig. 2.
  • local memory 242 may include one or more of ROM 332, RAM 333, EEPROM 335, flash 334, and/or disc 336.
  • one or more processors may execute computer readable instructions encoded on one or more of the memories (e.g., memories 160, 242, 332, 333, 335, 334, and/or 336).
  • processor 300 may be used to implement one or more processors of an ultrasound imaging system, such as ultrasound imaging system 200.
  • the memory encoded with the instructions may be included in the ultrasound imaging system, such as local memory 242.
  • the processor and/or memory may be in communication with one another and the ultrasound imaging system, but the processor and/or memory may not be included in the ultrasound imaging system. Execution of the instructions may cause the ultrasound imaging system to perform one or more functions.
  • FIG. 4 shows a block diagram of a process for training and deployment of a machine learning model according to some embodiments of the present disclosure.
  • the process shown in FIG. 4 may be used to train the MLM 162 of Figure 1 and/or the MLM 272 of Figure 2 included in the classification/verification processor 270.
  • the left hand side of FIG. 4, phase 1, illustrates the training of a MLM.
  • training sets which include multiple instances of input arrays and output classifications may be presented to the training algorithm(s) of the MLM (e.g., the AlexNet training algorithm, as described by Krizhevsky, A., Sutskever, I. and Hinton, G. E. “ImageNet Classification with Deep Convolutional Neural Networks,” NIPS 2012, or its descendants).
  • Training may involve the selection of a starting (blank) architecture 412 and the preparation of training data 414.
  • the starting architecture 412 may be an architecture (e.g., an architecture for a neural network with defined layers and arrangement of nodes but without any previously trained weights) or a partially trained network, such as the inception networks, which may then be further tailored for classification of ultrasound images.
  • training data 414 are provided to a training engine 410 for training the MLM.
  • upon a sufficient number of iterations (e.g., when the MLM performs consistently within an acceptable error), the model 420 is said to be trained and ready for deployment, which is illustrated in the middle of FIG. 4, phase 2.
  • the trained model 420 is applied (via inference engine 430) for analysis of new data 432, which is data that has not been presented to the model 420 during the initial training (in phase 1).
  • the new data 432 may include unknown images such as ultrasound images acquired during a scan of a patient (e.g., torso images acquired from a patient during a FAST exam).
  • the trained model 420 implemented via engine 430 is used to classify the unknown images in accordance with the training of the model 420 to provide an output 434 (e.g., which anatomical features are included in the image, what zone the image was acquired from, or a combination thereof).
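  • The two phases of FIG. 4 can be sketched, under stated assumptions, roughly as below: phase 1 iterates over labelled training data until the error is acceptable, and phase 2 applies the trained model to new images. The data loader, optimiser settings, and model class (from the sketch above) are illustrative.

```python
import torch
import torch.nn as nn

# Phase 1 (training engine 410): fit the starting model on labelled data 414.
def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:            # labelled training pairs
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model                                  # trained model 420

# Phase 2 (inference engine 430): classify new, previously unseen images 432.
@torch.no_grad()
def classify(model: nn.Module, new_images: torch.Tensor) -> torch.Tensor:
    model.eval()
    return model(new_images).argmax(dim=1)        # output 434: one class per image
```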
  • the workflow 500 shows an example image 510 which is collected by an ultrasound system.
  • the image 510 may represent a ‘raw’ image, after it has been processed but before it has been classified/verified; for example, the image 510 may be the output of the image processor 236 to the classification/verification processor 270 of Figure 2.
  • the image 510 is provided as an input to the block 520, which represents the operation of a classification/verification processor.
  • Block 520 includes block 522, which describes detecting and classifying organs in the ultrasound image 510, and block 524, which describes verifying the detection/classification based on position information from an IMU such as 112 of Figure 1 and/or 213 of Figure 2.
  • the imaging system may generate labelled image 530, which represents the image of 510, but with labels applied to represent the (verified) classified zone and organs within the image.
  • a label “RUQ” may be displayed on the labelled image 530 to indicate that the image has been classified as a view of the RUQ zone, and the labels liver, diaphragm, and kidney are displayed over the regions which the imaging system has classified as being those organs.
  • the labels displayed in the labelled image 530 have been verified based on the zone determined by the IMU data. For example, the IMU data may agree with the MLM that this is a view of the RUQ, and thus it is appropriate for those three organs to appear.
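  • One way the labelled display of FIG. 5 could be rendered is sketched below; the label positions, colours, and the (x, y, w, h) box format are assumptions for illustration and not the disclosed graphics processor.

```python
import cv2
import numpy as np

# Sketch: draw the verified zone label and organ labels over a B-mode frame.
def draw_labels(image: np.ndarray, zone: str, rois: dict) -> np.ndarray:
    out = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)          # assume 8-bit grayscale input
    cv2.putText(out, zone, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 255), 2)
    for name, (x, y, w, h) in rois.items():
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 1)
        cv2.putText(out, name, (x, max(y - 5, 15)), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 255, 0), 1)
    return out

# e.g. draw_labels(frame, "RUQ", {"liver": (40, 60, 180, 140), "kidney": (230, 150, 120, 90)})
```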
  • the steps of the workflow 500 may represent steps that are repeated each time a new image is captured.
  • the user may be presented with labeled images 530 in real-time or close to real-time (e.g., without undue delay by the processor).
  • the labelled images 530 may be updated at around a video rate.
  • the IMU signal may be unique for each of the three patient positions represented by diagrams 610-630, even though the position of the probe relative to the subject may remain constant.
  • the imaging system may have a set of criteria to determine which zone is being imaged based on the expected IMU signals from different positions and zones. Data sets such as those shown in FIG. 6 may be used to develop criteria (e.g., position matching information such as 165 of Figure 1) which may be used to determine which zone is being imaged based on the position information from the IMU.
  • FIGS. 7A-7C are a set of images of example classification and verification steps according to some embodiments of the present disclosure.
  • FIGS. 7A-7C show a number of images which represent different example steps in image acquisition, classification and verification by an imaging system such as 100 of Figure 1 and/or 200 of Figure 2.
  • Figure 7A shows an image 710a which is classified in order to generate a labelled classified image 720a.
  • the MLM has classified the liver, diaphragm, and kidney within the image and thus determined that this is a view of the RUQ.
  • the graph of position information 730a shows position information which matches the imaging of the RUQ. Accordingly, the classification by the MLM may be verified, and the verified image 740a matches the classified image 720a since no changes are needed.
  • FIG. 8 is a flow chart of a method according to some embodiments of the present disclosure.
  • the method 800 may, in some embodiments, be implemented by one or more of the systems and apparatuses described herein, such as by the imaging system 100 of Figure 1 and/or the ultrasound imaging system 200 of Figure 2.
  • the method 800 may be implemented in hardware, software, or a combination thereof.
  • the method 800 may represent instructions loaded into non-transitory computer readable memory such as memory 160 of Figure 1, 242 of Figure 2, and/or 332-336 of Figure 3 and executed by a processor such as 152 of Figure 1, 240/236/272 of Figure 2 and/or 300 of Figure 3.
  • the steps of boxes 810 and 820 may happen more or less simultaneously with each other.
  • the method 800 may include measuring the position information while acquiring the image.
  • Box 820 may be followed by box 830 which describes classifying one or more regions of interest in the image with an MLM (e.g., 162 of Figure 1 and/or 272 of Figure 2).
  • the method 800 may include classifying one or more regions of the image as an organ or part thereof.
  • Box 830 may be followed by box 840, which describes verifying the classified regions of interest based on the position information.
  • the method 800 may include determining a true imaging zone based on the position information and comparing expected regions of interest based on the true imaging zone to the ROIs classified by the MLM. If the classified and expected ROIs match, then the classification of the ROIs may be verified and the classification may be unchanged. If the expected and classified ROIs do not match then the method may include changing or removing the classification of the ROIs.
  • the method 800 may include comparing the classified ROIs to an ROI model (e.g., 168) and removing or changing any classifications of types of ROIs which are not expected in the true imaging zone.
  • the method 800 may include changing a classification of one of the regions of interest from a first organ to a second organ based on a similarity between the appearance of the first organ and the second organ and the presence of the second organ, but not the first organ, in the verified classified imaging zone. For example, a classification of liver in the MLM-classified RUQ zone may be changed to a classification of spleen in the IMU-determined true LUQ zone.
  • the method 800 may include generating display data which includes the image with a label based on the verified imaging zone.
  • the display data may also include labels for the verified ROIs in the image.
  • the method 800 may include determining if the imaging unit is correctly positioned based on the position information. For example, the method 800 may include determining whether the imaging probe is upside down. In some embodiments, the method 800 may include alerting a user (e.g., via a tone, a display, or combinations thereof) if the imaging unit is not in the correct position. In some embodiments, the method 800 may include applying a transformation to the images before providing the images to the MLM for classification. For example, the method 800 may include flipping the image if the imaging unit is upside down.
  • any ultrasound exam that has a set of standard images, videos, measurements, or a combination thereof, associated with the exam may utilize the features of the present disclosure.
  • the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
  • the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
  • processors described herein can be implemented in hardware, software, and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
  • the functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
  • although the present system has been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to, renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An imaging system may include a probe used to collect an image from a subject. The probe includes an inertial monitoring unit (IMU) which generates position information associated with a position of the probe when the image is collected. A processor classifies one or more regions of interest in the image. The classified one or more regions of interest are verified based on the position information. For example, the position information may be used to determine a true imaging zone of the image, and the classified regions of interest may be compared to a list of expected regions of interest in the true imaging zone.

Description

SYSTEMS AND METHODS FOR IMAGING SCREENING
GOVERNMENT INTEREST
[001] This invention was made with United States government support awarded by the United States Department of Health and Human Services under the grant number HHS/ASPR/BARDA 75A50120C00097. The United States has certain rights in this invention.
TECHNICAL FIELD
[002] The present disclosure pertains to imaging systems and methods for verifying the accuracy of automatic classification of an imaging exam, more particularly for verifying AI classification of zones and objects of interest in an image.
BACKGROUND
[003] Imaging may be used for a wide array of purposes, such as in medical diagnosis, monitoring, and research. For example, ultrasound exams are valuable for a wide variety of diagnostic purposes such as fetal development monitoring, cardiac valve health assessment, liver disease monitoring, and detecting internal bleeding. Images may capture a particular field of view or zone of a subject, which may include one or more objects of interest such as organs. In order to accurately determine information about the image, it may be useful to automatically label the zone and objects of interest represented by the image. However, automatic classification tools, such as AI models, may be prone to errors, such as misidentification. Accordingly, there may be a need to verify the accuracy of the automatic classification.
SUMMARY
[004] The present disclosure addresses the challenges of automatic classification of images. An image may be collected by an imaging unit such as a probe and then one or more regions of interest (ROIs) in the image may be classified (for example by an AI model). The system described herein includes an inertial measurement unit (IMU) in the imaging unit which generates position information during the imaging. Based on the position information, the ROIs classified by the AI model may be verified. [005] In at least one aspect, the present disclosure relates to an imaging system which includes an imaging unit and a processor. The imaging unit acquires an image from a subject and includes an inertial measurement unit (IMU) which generates position information based on a position of the imaging unit. The processor receives the image from the imaging unit, identifies one or more regions of interest in the image, determines a true zone of the subject based on the position information, compares the identified one or more regions of interest to expected regions of interest based on the true zone, and removes or changes the identification of the one or more regions of interest if they do not match the expected regions of interest.
[006] The one or more regions of interest may represent organs and the expected regions of interest may be a list of organs expected to be visible in an image of the true zone. The processor may implement a machine learning model which identifies the regions of interest based on the image. The imaging system may be an ultrasound imaging system and the probe may include a transducer array.
[007] The processor may also generate display data based on the image and a label for the determined true zone based on the position information. The imaging system may include a display which displays the display data. The IMU may include an accelerometer configured to generate the position information.
[008] In at least one aspect, the present disclosure relates to a non-transitory computer readable medium encoded with instructions that when executed, cause an imaging system to classify one or more regions of interest in an image based on a machine learning model (MLM), determine a true imaging zone based on position information associated with the image, compare the classified one or more regions of interest to expected regions of interest based on the true imaging zone, and correct the classifications if the classified one or more regions of interest based on the MLM do not match the expected regions of interest based on the true imaging zone.
[009] The instructions when executed may cause the imaging system to correct the classifications by changing or removing the classified one or more regions of interest. The instructions when executed may cause the imaging system to remove the classification of selected ones of the one or more regions of interest if the selected ones are not expected to appear in the true imaging zone. The instructions when executed may cause the imaging system to generate display data based on the corrected classifications. [010] In at least one aspect, the present disclosure relates to a method which includes acquiring an ultrasound image from a subject with an ultrasound probe, acquiring position information associated with the ultrasound image based on an inertial measurement unit (IMU) in the ultrasound probe, classifying one or more regions of interest in the image with a machine learning model (MLM), and verifying the classification of the one or more regions of interest based on the position information.
[011] The method may include comparing the classified one or more regions of interest to expected regions of interest based on the position information, and updating the classification of the one or more regions of interest if the classification does not match the expected one or more regions of interest. The method may include removing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM. The method may include changing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM. The method may include changing a classification of one of the regions of interest from a first organ to a second organ based on a similarity between the appearance of the first organ and the second organ and the presence of the second organ, but not the first organ, in the verified imaging zone.
[012] The method may include determining a true imaging zone based on the position information and verifying the classification based, in part, on the true imaging zone.
[013] The method may include generating display data which includes the image with a label based on the verified classifications. The method may include determining if the ultrasound probe is correctly positioned based on the position information. The method may include determining that the ultrasound probe is upside down and flipping the ultrasound image if the ultrasound probe is upside down. The method may include performing a focused assessment with sonography for trauma (FAST) exam with the ultrasound probe, where the ultrasound image is part of the FAST exam.
BRIEF DESCRIPTION OF THE DRAWINGS
[014] FIG. 1 is a block diagram of an imaging system according to some embodiments of the present disclosure. [015] FIG. 2 shows a block diagram of an ultrasound imaging system 200 according to some embodiments of the present disclosure.
[016] FIG. 3 is a block diagram illustrating an example processor 300 according to some embodiments of the present disclosure.
[017] FIG. 4 shows a block diagram of a process for training and deployment of a machine learning model according to some embodiments of the present disclosure.
[018] FIG. 5 is a flow chart of an example workflow of an imaging system according to some embodiments of the present disclosure.
[019] FIG. 6 is a set of diagrams which show examples of IMU position information at different patient positions according to some embodiments of the present disclosure.
[020] FIGS. 7A-7C are a set of images of example classification and verification steps according to some embodiments of the present disclosure.
[021] FIG. 8 is a flow chart of a method according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
[022] The following description of certain embodiments is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of the present system. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present system is defined only by the appended claims.
[023] During an example imaging procedure, an imaging system may be positioned near a subject to capture images of a particular field of view or zone of the subject. For example, the imaging system may include a handheld probe which a user positions at different locations on the subject’s body in order to capture images of one or more different zones. An automatic classification tool, such as a trained AI or machine learning model (MLM), may automatically classify different regions or objects of interest (e.g., organs) within the image. Based on the regions of interest identified, the MLM may also determine a zone visible in the image.
[024] As an example of an imaging exam, the Focused Assessment with Sonography for Trauma (FAST) exam is a rapid ultrasound exam conducted in trauma situations to assess patients for free-fluid. Different zones (e.g., regions of the body) of a subject are scanned to search for free-fluid (e.g., blood) within the subject. Zones typically include the right upper quadrant (RUQ), the left upper quadrant (LUQ), and the pelvis (SP). Zones may further include the lung and heart. Each zone may include one or more regions of interest (ROIs), which may be organs or particular views of organs. For example, a typical FAST exam includes images of the kidney, liver, liver tip, diaphragm, spleen, kidney-liver interface, diaphragm-liver interface, and volume fanning acquired from the RUQ zone. In another example, during a typical FAST exam a subxiphoid view of the heart is acquired. Not every region of interest may be visible in every zone. For example, the RUQ will generally not include a view of the spleen.
[025] The FAST exam is an important ultrasound test in trauma situations to quickly assess patients for free-fluid (e.g., blood due to internal hemorrhaging). The test aids in determining the severity of the injury, allowing for timely and accurate decision-making in patient care. Adequate image quality is particularly important in the FAST exam as it is primarily used in critical care situations, where even small amounts of free-fluid can indicate a significant injury. Having a good understanding of the shape and location of organs in each zone can help inexperienced physicians/sonographers obtain high-quality images. Understanding the anatomy of each zone (e.g., Right Upper Quadrant, Left Upper Quadrant) can guide data acquisition and improve the quality of images obtained during the exam. Therefore, knowledge about the zones and organs is important to achieving optimal data acquisition in the FAST exam, which is necessary for accurate diagnosis and treatment.
[026] Artificial Intelligence (AI) has been widely used in medical data analysis, particularly in medical images. Various classification and detection models have been developed to classify images and detect regions of interest (ROI) in them. However, the accuracy of these models often falls short of professional standards due to the lack of sufficient data required for deep learning models. In medical imaging, achieving high sensitivity is crucial, particularly in cases where the detection of ROIs can impact patient mortality and influence clinical decision-making. When AI models fail to reach reasonable accuracy levels, incorporating additional information can enhance sensitivity and improve the reliability of the system. Increasing sensitivity can also foster greater trust in the system, leading to increased confidence in the model by the clinical team.
[027] An imaging system of the present disclosure includes a probe which includes an inertial measurement unit (IMU). The IMU collects positioning data related to a position of the probe while the imaging system is collecting an image. A trained MLM automatically classifies or identifies one or more regions of interest in the image. Based on the position information from the IMU, the imaging system verifies the classification performed by the MLM and, if needed, corrects the classification. For example, based on the position information from the IMU, the system may determine a ‘true’ imaging zone. Based on that true imaging zone, the system may retrieve a list of expected regions of interest for the imaging zone (e.g., based on anatomy). The expected list of ROIs is compared to the MLM-classified ROIs to identify errors. For example, the correction may include changing a label of which zone the image represents, removing an incorrect label from a misidentified ROI, changing a label of a misidentified ROI, or combinations thereof. The imaging system may generate display information (e.g., a labelled figure) based on the corrected classifications.
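By way of non-limiting illustration only, the following Python sketch shows one way the comparison between MLM-classified ROIs and the expected ROIs for the true zone could be carried out. The zone names, the EXPECTED_ROIS table, and the verify_classifications helper are hypothetical stand-ins for the position-based zone determination and the ROI model described above; they are not the implementation of the disclosed system.

```python
# Illustrative sketch only. Zone names, ROI labels, and the helper names
# (EXPECTED_ROIS, verify_classifications) are hypothetical stand-ins for the
# IMU-based zone lookup and the ROI model.

# Expected regions of interest per imaging zone (e.g., based on anatomy).
EXPECTED_ROIS = {
    "RUQ": {"liver", "kidney", "diaphragm", "liver tip"},
    "LUQ": {"spleen", "kidney", "diaphragm"},
    "SP":  {"bladder", "uterus", "prostate"},
}

def verify_classifications(classified_rois, true_zone):
    """Remove classified ROIs that are not expected in the true zone.

    classified_rois: iterable of labels produced by the MLM.
    true_zone: zone determined from the IMU position information.
    Returns the verified labels and the labels that were removed.
    """
    expected = EXPECTED_ROIS.get(true_zone, set())
    verified = [roi for roi in classified_rois if roi in expected]
    removed = [roi for roi in classified_rois if roi not in expected]
    return verified, removed

# Example: the MLM labels a spleen in an image whose IMU data indicates the RUQ.
verified, removed = verify_classifications(["liver", "kidney", "spleen"], "RUQ")
print(verified)  # ['liver', 'kidney']
print(removed)   # ['spleen']  -- label removed (or remapped instead, see below)
```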
[028] In an example implementation, the imaging system may be an ultrasound system, and may include a handheld probe used to conduct a FAST exam. The handheld probe includes an IMU which collects position information such as the rotational orientation of the handheld probe. An ultrasound image of a target zone is collected by positioning the handheld probe in a particular location relative to the subject. An MLM automatically identifies any ROIs within the zone based on the image. From those identified ROIs the MLM may also identify a preliminary zone (e.g., based on which organs were classified as being part of the image). A processor of the ultrasound system uses the orientation information from the IMU to determine the true zone. For example, the orientation of the handheld probe may determine if the image was collected facing to the right or left (relative to the patient), which may determine if the image is of the LUQ or RUQ. Based on the true zone, the processor may compare a list of expected ROIs (e.g., which organs should appear in an image of the true zone) to the MLM identified ROIs. If the MLM determined ROIs are not verified (e.g., one or more of the classified ROIs are not expected to appear in the true zone), the processor may perform any needed adjustments to the classification of one or more aspects of the image. For example, the processor may display the true zone as the label. In another example, if the MLM identifies an ROI which should not be present in the IMU determined zone (e.g., based on a model of what ROIs should appear in what zones), that erroneous label may be removed or changed.
[029] FIG. 1 is a block diagram of an imaging system according to some embodiments of the present disclosure. The imaging system 100 includes an imaging unit 110 and a computing unit 150. The imaging unit 110 may be positioned about a subject 102 in order to collect one or more images of the subject 102. For example, two positions of the imaging unit 110, Position A and Position B, are shown. The imaging unit 110 is communicatively coupled to the computing unit 150 and the imaging unit 110 provides imaging data and position information to the computing unit 150. The computing unit 150 generates images 166 based on the imaging data from the imaging unit 110, classifies the images 166 based on a trained MLM 162, and verifies (and if needed corrects) the classification based on position information 164 received from an IMU of the imaging unit 110.
[030] The imaging unit 110 includes an IMU 112 and an imaging modality 114. The imaging unit 110 may be positionable about the subject 102. For example, the imaging unit 110 may be a moveable probe, such as a handheld probe, which is positioned about the subject 102 in order to direct the imaging modality 114 to collect images of one or more zones.
[031] The IMU 112 generates position information based on a position of the imaging unit 110. For example, the position information may include information about the spatial location of the imaging unit 110 (e.g., its x, y, z coordinates), information about the rotational orientation of the imaging unit 110 (e.g., its pitch, yaw, and roll), or combinations thereof. In some embodiments, the IMU 112 may include an accelerometer, a gyroscope, or other tools for measuring position. For example, the IMU 112 may include a 3-axis accelerometer which may be used to measure a rotational orientation of the imaging unit 110 relative to the Earth’s gravitational field.
[032] The imaging unit 110 includes an imaging modality 114. The imaging modality 114 collects imaging data from the subject 102. The imaging modality 114 collects the imaging data based on a field of view of the subject 102, which may be based, in part, on the position of the imaging unit 110 relative to the subject 102. The structure and the operation of the imaging modality 114 may vary based on the type of the imaging system 100. For example, the imaging system 100 may be used for ultrasound imaging, photoacoustic imaging, optical imaging, or combinations thereof.
[033] In an example embodiment where the imaging system 100 includes an ultrasound imaging system, the imaging modality 114 may include a transducer array, which directs ultrasound into the subject 102 and measures reflected sound energy (e.g., echoes) received from the subject 102, which can be used by the imaging system 100 to generate an image. In such embodiments, the imaging unit 110 may represent a handheld unit such as a handheld probe, which may be positioned on or near the subject 102 (e.g., in contact with the subject’s skin) for imaging regions of interest, such as organs 104 and 106, within a zone of the subject 102. As part of an imaging exam, the imaging unit 110 may be positioned at a first zone of the subject 102 (e.g., in Position A) to collect a first image (or images) and then may be positioned at a second zone of the subject 102 (e.g., at Position B) to collect a second image (or images).
[034] The imaging unit 110 provides imaging data from the imaging modality 114 and position information from the IMU 112 to the computing unit 150. In some embodiments, one or more of the components or functions described as being part of the computing unit 150 may be located in the imaging unit 110 instead. In some embodiments, the computing unit 150 and imaging unit 110 may be separate units which are coupled via wired and/or wireless communications. In some embodiments, all or part of the computing unit 150 may be at a location which is remote from the imaging unit 110 and subject. In some embodiments the computing unit 150 may be a general purpose computer running software to operate the imaging system 100. For example, the computing unit 150 may be a desktop, laptop, or tablet computer.
[035] The computing unit 150 includes a computer readable memory 160 which may be loaded with one or more instructions 170. The computing unit 150 includes one or more processors 152 which operate the instructions 170. The computing unit 150 also includes a communication module 154 to enable communications with the imaging unit 110 and/or with external systems (e.g., a data server, the internet, local networks, etc.), an input system 156 which allows a user to interact with the imaging system 100, and a display 158 which presents information to the user. The input system 156 may include a mouse, keyboard, touch screen, or combinations thereof.
[036] The memory 160 stores a set of instructions 170, as well as various other types of information which may be useful. For example, the memory 160 may store a trained machine learning model 162, position information 164 acquired from the IMU 112, and images 166 generated from the imaging data generated by the imaging modality 114. The memory 160 may also store information such as position matching information 165, which determines an imaging position of the imaging unit 110 based on the position information 164, and an ROI model 168 (or expected ROI information), which may indicate which types of ROIs are present in which imaging zones.
[037] The computing unit 150 may perform one or more processing steps to generate the images 166. For example, if the imaging system is an ultrasound system, then the imaging data may represent raw signals from an ultrasound transducer, and processing may be used to generate images based on those raw signals. The position information 164 is collected from the IMU 112. The position information 164 and images 166 may be associated with each other. For example, both the position information 164 and images 166 may be time-stamped, and the images 166 may be associated with the position information 164 collected at or around the time the image was collected. For example, each image may be matched to a set of position information.
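As a non-limiting illustration of associating time-stamped images with position information, the sketch below pairs each image with the IMU sample closest in time. The function name match_images_to_imu, the nearest-timestamp policy, and the example timestamps are assumptions for illustration only.

```python
import bisect

# Illustrative sketch only; the nearest-timestamp policy and the data layout
# are assumptions, not the disclosed matching scheme.

def match_images_to_imu(image_times, imu_times, imu_samples):
    """Associate each image with the IMU sample closest in time.

    image_times: sorted list of image timestamps (seconds).
    imu_times:   sorted list of IMU sample timestamps (seconds).
    imu_samples: list of IMU readings aligned with imu_times.
    """
    matched = []
    for t in image_times:
        i = bisect.bisect_left(imu_times, t)
        # Pick whichever neighboring IMU sample is closer in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_times)]
        j = min(candidates, key=lambda k: abs(imu_times[k] - t))
        matched.append((t, imu_samples[j]))
    return matched

# Example usage with made-up timestamps and 3-axis accelerometer readings.
images = [0.10, 0.20, 0.30]
imu_t = [0.00, 0.08, 0.16, 0.24, 0.32]
imu_x = [(0, 0, -9.8), (0.1, 0, -9.8), (0.2, 0, -9.7), (9.6, 0, -1.0), (9.7, 0, -0.8)]
print(match_images_to_imu(images, imu_t, imu_x))
```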
[038] The instructions 170 include instruction 172, which describes classifying a region of interest in an image based on the MLM 162. The MLM 162 may be trained to recognize regions of interest, such as organs, within the images. For example, the MLM 162 may be trained based on previously recorded images which have been labelled. In some embodiments, the MLM 162 may make an initial zone determination based on the classified ROIs in the image. For example, the MLM 162 may identify organs such as the liver, diaphragm, kidney, and spleen when they are present within the image.
[039] The instructions 170 include instruction 174, which describes determining the imaged zone based on position information from the IMU 112. The memory 160 may store position matching information 165 which determines which zone is imaged based on the position information 164. For example, in the example positions shown, a first zone may be imaged from Position A, while a second zone may be imaged from Position B. To change position between Position A and Position B, there is a 90° rotation in the XY plane and a change in location in the -Y and -X directions. The position information may reflect one or more of these changes and the position matching information 165 may use one or more of those changes to determine which imaging position is reflected in the position information 164. For example, if the IMU 112 includes a 3-axis accelerometer, then the axis along which the Earth’s gravity vector is recorded will shift as the unit is rotated between Position A and Position B. Accordingly, the position matching information 165 may set up one or more criteria based on which axis gravity appears along (and which direction it is pointing relative to the accelerometer) to determine the orientation of the imaging unit 110, which in turn may determine the position of the imaging unit 110 relative to the subject and thus the zone being imaged. For example, if gravity primarily appears in an axis oriented vertically with respect to the imaging unit 110, it may indicate that the imaging unit 110 is in Position A, but if gravity primarily appears in an axis oriented horizontally with respect to the imaging unit 110, it may indicate that the imaging unit 110 is in Position B.
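The following sketch illustrates, under assumed axis conventions, how the dominant axis and sign of the gravity vector reported by a 3-axis accelerometer could be mapped to Position A or Position B. The functions dominant_gravity_axis and infer_position and the axis-to-position mapping are hypothetical examples of such criteria, not the disclosed position matching information 165.

```python
import numpy as np

# Illustrative sketch only. The axis conventions and the axis-to-position
# mapping are assumptions; real criteria would be derived from the probe geometry.

def dominant_gravity_axis(accel):
    """Return which probe axis the gravity vector mainly lies along and its sign.

    accel: 3-axis accelerometer reading (m/s^2) in the probe frame.
    """
    a = np.asarray(accel, dtype=float)
    axis = int(np.argmax(np.abs(a)))          # 0 = x, 1 = y, 2 = z of the probe
    sign = int(np.sign(a[axis]))
    return axis, sign

def infer_position(accel):
    """Map the dominant gravity axis to an assumed probe position."""
    axis, sign = dominant_gravity_axis(accel)
    if axis == 2:            # gravity along the probe's vertical axis
        return "Position A" if sign < 0 else "Position A (upside down)"
    if axis == 0:            # gravity along the probe's horizontal axis
        return "Position B" if sign < 0 else "Position B (upside down)"
    return "unknown"

print(infer_position([0.3, 0.1, -9.7]))   # Position A
print(infer_position([-9.6, 0.2, 0.5]))   # Position B
```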
[040] The instructions 170 include instruction 176 which describes verifying (and if not verified, correcting) the classification performed by the MLM 162 based on the zone determined based on the position information 164. For example, the zone determined based on the IMU 112 position information 164 may generally be considered to be the ‘true’ zone since it is based on measured information about the position of the imaging unit 110 during imaging. The memory 160 may include an ROI model 168, which may include a library of what types of ROIs should be visible in different zones. Based on the true zone identified based on the position information, a list of expected ROIs may be retrieved from the ROI model 168 and compared to the ROIs classified by the MLM 162. If the classified ROIs do not match the expected ROIs, then the classified ROIs may have their classifications removed or corrected.
[041] Referring to example positions A and B, the two example ROIs 104 and 106 may both be expected to be seen in images collected from Position A; however, only ROI 104 may be expected to be seen from Position B. The ROI model 168 may be generated based on the known anatomy of the different zones which are imaged in a subject. If the MLM-determined zone is not verified and is corrected to the true zone, the ROI model 168 may be compared to the ROIs classified by the MLM to determine if any of the classifications should be changed. For example, if the MLM classifies an object as being ROI 106, but the computing unit 150 has determined that the imaged zone is from Position B, then the classification of the object as ROI 106 may be determined to be incorrect, and may be corrected. In some embodiments, the classification of the incorrect ROI may be removed and that ROI may go unclassified in the verified image. In some embodiments, the classification of the ROI may be changed from the incorrect classification to a different classification. For example, if it is known that the MLM 162 is prone to misclassifying ROI 104 as ROI 106 from Position B (e.g., because of a similar appearance), then the classification of ROI 106 may be changed to ROI 104 in the verified image.
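As a non-limiting illustration of changing (rather than removing) a classification, the sketch below uses a hypothetical confusion map of organ labels the MLM is assumed to be prone to confuse in particular zones. The CONFUSION_MAP contents and the correct_roi_label helper are invented for illustration only.

```python
# Illustrative sketch only. The confusion map (which organ the MLM tends to
# mistake for which, in which zone) is a hypothetical example of the kind of
# prior knowledge that could drive a label change rather than a removal.

CONFUSION_MAP = {
    # (true zone, misclassified label) -> corrected label
    ("LUQ", "liver"): "spleen",   # liver and spleen can look similar
    ("RUQ", "spleen"): "liver",
}

def correct_roi_label(roi_label, true_zone, expected_rois):
    """Return a corrected label, or None to drop the classification."""
    if roi_label in expected_rois.get(true_zone, set()):
        return roi_label                         # verified as-is
    remapped = CONFUSION_MAP.get((true_zone, roi_label))
    if remapped and remapped in expected_rois.get(true_zone, set()):
        return remapped                          # change the classification
    return None                                  # remove the classification

expected = {"RUQ": {"liver", "kidney", "diaphragm"}, "LUQ": {"spleen", "kidney"}}
print(correct_roi_label("spleen", "RUQ", expected))  # liver
print(correct_roi_label("heart", "RUQ", expected))   # None
```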
[042] The imaging system 100 may present information to the user via the display 158. For example, the imaging system 100 may apply labels to the image 166 which indicate the zone being imaged as well as any classified ROIs present in the image (as determined by the MLM 162 and corrected by the position information 164). In some embodiments, the imaging system 100 may present labels on pre-recorded images 166. In some embodiments, the imaging system 100 may stream the images 166 ‘live’ onto the display and may classify those images and verify and correct the classifications in real-time or close to real-time.
[043] In some embodiments, the imaging system 100 may present additional feedback to the user. For example, if the IMU 112 data indicates that the imaging unit 110 is being held backwards (e.g., upside down) but is otherwise positioned correctly, the imaging system 100 may prompt the user to correct the positioning of the imaging unit 110. For example, if the IMU 112 includes a 3-axis accelerometer, then if it is being held upside down, its position information may match one of the known patterns of position information in the position matching information 165, except that the signals are inverted (e.g., because the probe 110 and thus the accelerometer are upside down relative to the expected gravity vector). The imaging system 100 may display a message, sound an alert and/or provide other feedback to prompt the user to correct the positioning before the imaging continues.
[044] In some embodiments, in addition to, or instead of, prompting the user, the imaging system 100 may perform appropriate transformations to the images 166 before providing them to the MLM 162 for classification. For example, if the computing unit 150 determines that the imaging unit 110 is upside down, then the images may be flipped about the x-axis to make them right side up.
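A minimal sketch of such a transformation is shown below; which array axis corresponds to flipping about the x-axis depends on the image orientation convention, which is an assumption here.

```python
import numpy as np

# Illustrative sketch only; the mapping of image rows/columns to the probe
# orientation is assumed, not specified by the system.

def prepare_for_mlm(image, upside_down):
    """Flip the image before classification if the probe was held upside down."""
    img = np.asarray(image)
    if upside_down:
        img = np.flip(img, axis=0)   # flip about the x-axis (reverse the rows)
    return img

frame = np.arange(12).reshape(3, 4)           # stand-in for an ultrasound frame
print(prepare_for_mlm(frame, upside_down=True))
```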
[045] FIG. 2 shows a block diagram of an ultrasound imaging system 200 according to some embodiments of the present disclosure. The ultrasound imaging system 200 may, in some embodiments, implement the imaging system 100 of Figure 1. The ultrasound imaging system 200 includes a probe 212 (e.g., imaging unit 110 of Figure 1) and processing circuitry 250 (e.g., computing unit 150 of Figure 1).
[046] An ultrasound imaging system 200 according to the present disclosure may include a transducer array 214, which may be included in an ultrasound probe 212, for example an external probe or an internal probe such as an intravascular ultrasound (IVUS) catheter probe. In other embodiments, the transducer array 214 may be in the form of a flexible array configured to be conformally applied to a surface of a subject to be imaged (e.g., patient). The transducer array 214 is configured to transmit ultrasound signals (e.g., beams, waves) and receive echoes responsive to the ultrasound signals. A variety of transducer arrays may be used, e.g., linear arrays, curved arrays, or phased arrays. The transducer array 214, for example, can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging. As is generally known, the axial direction is the direction normal to the face of the array (in the case of a curved array the axial directions fan out), the azimuthal direction is defined generally by the longitudinal dimension of the array, and the elevation direction is transverse to the azimuthal direction.
[047] The ultrasound probe 212 includes an IMU 213 (e.g., 112 of Figure 1), which records information about the position of the probe 212 and provides that information as a stream of data which can be stored as position information. For example, the IMU 213 may include a 3-axis accelerometer, and may provide position information which includes 3 channels, each of which represents the component of an acceleration vector relative to one of the three axes.
[048] In some embodiments, the transducer array 214 may be coupled to a microbeamformer 216, which may be located in the ultrasound probe 212, and which may control the transmission and reception of signals by the transducer elements in the array 214. In some embodiments, the microbeamformer 216 may control the transmission and reception of signals by active elements in the array 214 (e.g., an active subset of elements of the array that define the active aperture at any given time). [049] In some embodiments, the microbeamformer 216 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 218, which switches between transmission and reception and protects the main beamformer 222 from high energy transmit signals. In some embodiments, for example in portable ultrasound systems, the T/R switch 218 and other elements in the system can be included in the ultrasound probe 212 rather than in the ultrasound system base, which may house the image processing electronics. An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface.
[050] The transmission of ultrasonic signals from the transducer array 214 under control of the microbeamformer 216 is directed by the transmit controller 220, which may be coupled to the T/R switch 218 and a main beamformer 222. The transmit controller 220 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 214, or at different angles for a wider field of view. The transmit controller 220 may also be coupled to a user interface 224 and receive input from the user's operation of a user control. The user interface 224 may include one or more input devices such as a control panel 252, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
[051] In some embodiments, the partially beamformed signals produced by the microbeamformer 216 may be coupled to a main beamformer 222 where partially beamformed signals from individual patches of transducer elements may be combined into a fully beamformed signal. In some embodiments, microbeamformer 216 is omitted, and the transducer array 214 is under the control of the beamformer 222 and beamformer 222 performs all beamforming of signals. In embodiments with and without the microbeamformer 216, the beamformed signals of beamformer 222 are coupled to processing circuitry 250, which may include one or more processors (e.g., a signal processor 226, a B-mode processor 228, a Doppler processor 260, and one or more image generation and processing components 268) configured to produce an ultrasound image from the beamformed signals (i.e., beamformed RF data).
[052] The signal processor 226 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 226 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination. The processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation. The IQ signals may be coupled to a number of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data). For example, the system may include a B-mode signal path 258 which couples the signals from the signal processor 226 to a B-mode processor 228 for producing B-mode image data.
[053] The B-mode processor can employ amplitude detection for the imaging of structures in the body. The signals produced by the B-mode processor 228 may be coupled to a scan converter 230 and/or a multiplanar reformatter 232. The scan converter 230 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 230 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format. The multiplanar reformatter 232 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer). The scan converter 230 and multiplanar reformatter 232 may be implemented as one or more processors in some embodiments.
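For illustration only, the sketch below shows a simple nearest-neighbour rearrangement of beam/range (polar) samples onto a Cartesian grid, i.e., the kind of operation a scan converter performs when producing a sector-shaped 2D format. The geometry, grid sizes, and the scan_convert helper are assumptions; a practical scan converter would use the exact probe geometry and interpolation.

```python
import numpy as np

# Illustrative sketch only: nearest-neighbour scan conversion from beam/range
# (polar) samples to a Cartesian grid. Geometry parameters are invented.

def scan_convert(polar, angles_rad, max_depth_m, out_shape=(256, 256)):
    """polar: 2D array [n_samples_along_beam, n_beams] of echo amplitudes."""
    n_samples, n_beams = polar.shape
    h, w = out_shape
    # Cartesian grid: z is depth (down), x is lateral, sector apex at (0, 0).
    x = np.linspace(-max_depth_m, max_depth_m, w)
    z = np.linspace(0.0, max_depth_m, h)
    X, Z = np.meshgrid(x, z)
    r = np.sqrt(X**2 + Z**2)                 # range of each output pixel
    th = np.arctan2(X, Z)                    # angle from the probe axis
    # Nearest polar sample for each output pixel.
    r_idx = np.round(r / max_depth_m * (n_samples - 1)).astype(int)
    th_idx = np.round(np.interp(th, angles_rad, np.arange(n_beams))).astype(int)
    valid = (r <= max_depth_m) & (th >= angles_rad[0]) & (th <= angles_rad[-1])
    out = np.zeros(out_shape)
    out[valid] = polar[np.clip(r_idx, 0, n_samples - 1)[valid],
                       np.clip(th_idx, 0, n_beams - 1)[valid]]
    return out

beams = np.random.rand(512, 128)                          # stand-in envelope data
img = scan_convert(beams, np.linspace(-0.6, 0.6, 128), max_depth_m=0.15)
print(img.shape)                                          # (256, 256)
```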
[054] A volume renderer 234 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.). The volume renderer 234 may be implemented as one or more processors in some embodiments. The volume renderer 234 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
[055] In some embodiments, the system may include a Doppler signal path 262 which couples the output from the signal processor 226 to a Doppler processor 260. The Doppler processor 260 may be configured to estimate the Doppler shift and generate Doppler image data. The Doppler image data may include color data which is then overlaid with B-mode (i.e., grayscale) image data for display. The Doppler processor 260 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter. The Doppler processor 260 may be further configured to estimate velocity and power in accordance with known techniques. For example, the Doppler processor may include a Doppler estimator such as an autocorrelator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function. Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques. Other estimators related to the temporal or spatial distributions of velocity such as estimators of acceleration or temporal and/or spatial velocity derivatives can be used instead of or in addition to velocity estimators. In some embodiments, the velocity and power estimates may undergo further threshold detection to further reduce noise, as well as segmentation and post-processing such as filling and smoothing. The velocity and power estimates may then be mapped to a desired range of display colors in accordance with a color map. The color data, also referred to as Doppler image data, may then be coupled to the scan converter 230, where the Doppler image data may be converted to the desired image format and overlaid on the B-mode image of the tissue structure to form a color Doppler or a power Doppler image.
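As a non-limiting illustration of a lag-one autocorrelation (Kasai-type) estimator of the general kind referenced above, the sketch below estimates mean Doppler frequency from the argument of the lag-one autocorrelation and Doppler power from the magnitude of the lag-zero autocorrelation. The parameter values and the simulated ensemble are invented for the example.

```python
import numpy as np

# Illustrative sketch of a lag-one autocorrelation estimator.
# Parameter values (f0, prf, c) are placeholders, not system settings.

def doppler_estimates(iq, f0=5e6, prf=4e3, c=1540.0):
    """iq: complex IQ ensemble of shape [n_pulses, n_depth_samples]."""
    r0 = np.mean(iq * np.conj(iq), axis=0)               # lag-zero autocorrelation
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]), axis=0)      # lag-one autocorrelation
    power = np.abs(r0)                                    # Doppler power
    f_d = np.angle(r1) * prf / (2.0 * np.pi)              # mean Doppler frequency
    velocity = f_d * c / (2.0 * f0)                       # axial velocity (m/s)
    return velocity, power

# Simulated ensemble: a scatterer moving at ~0.1 m/s toward the probe.
n_pulses, n_depth, f0, prf, c = 16, 64, 5e6, 4e3, 1540.0
fd = 2 * 0.1 * f0 / c                                     # expected Doppler shift
t = np.arange(n_pulses)[:, None] / prf
iq = np.exp(2j * np.pi * fd * t) * np.ones((1, n_depth))
v, p = doppler_estimates(iq, f0, prf, c)
print(round(float(v[0]), 3))                              # ~0.1
```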
[056] According to examples of the present disclosure, output from the scan converter 230, such as B-mode images and Doppler images, referred to collectively as ultrasound images, may be provided to the classification and verification processor 270 (e.g., which may perform the steps 172-176 of Figure 1). The ultrasound images may be 2D and/or 3D. In some examples, the classification/verification processor 270 may be implemented by one or more processors and/or application specific integrated circuits. The classification/verification processor 270 may analyze the 2D and/or 3D images to classify the ROIs represented by an image, determine a true zone based on the data from the IMU associated with the image, and verify and/or correct the classification of the ROIs based on the determined true zone.
[057] In some examples, the classification/verification processor 270 may include any one or more machine learning or artificial intelligence algorithms, and/or multiple neural networks, collectively referred to as machine learning models (MLM) 272 (e.g., MLM 162 of Figure 1). The MLM 272 may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder neural network, or the like. The MLM 272 may be implemented in hardware (e.g., neurons are represented by physical components) and/or software (e.g., neurons and pathways implemented in a software application) components. The MLM 272 implemented according to the present disclosure may use a variety of topologies and algorithms for training the MLM 272 to produce the desired output. For example, a software-based neural network may be implemented using a processor (e.g., single or multi-core CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel-processing) configured to execute instructions, which may be stored in computer readable medium, and which when executed cause the processor to perform a trained algorithm for classification of an organ, anatomical feature(s), and/or a view of an ultrasound image (e.g., an ultrasound image received from the scan converter 230). In some examples, the processor may perform a trained algorithm for identifying a zone and/or quality of an ultrasound image. In various embodiments, the MLM 272 may be implemented, at least in part, in a computer-readable medium including executable instructions executed by the classification/verification processor 270. These are merely example implementations of MLMs, and other potential implementations may be used in other example embodiments.
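Purely as an illustrative sketch of a software-based CNN classifier of the general kind the MLM 272 could include, the example below defines a small PyTorch model. The layer sizes, the five output classes, and the 128x128 single-channel input are assumptions, not the architecture of the disclosed MLM.

```python
import torch
import torch.nn as nn

# Illustrative sketch only; layer sizes, class count, and input size are assumed.

class RoiClassifier(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 32 * 32, n_classes)

    def forward(self, x):                     # x: [batch, 1, 128, 128] B-mode frames
        z = self.features(x)
        return self.head(z.flatten(start_dim=1))

model = RoiClassifier()
logits = model(torch.randn(2, 1, 128, 128))  # two stand-in grayscale frames
print(logits.shape)                          # torch.Size([2, 5])
```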
[058] In various examples, the MLM 272 may be trained using any of a variety of currently known or later developed learning techniques to obtain a neural network (e.g., a trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound images, measurements, and/or statistics. In some embodiments, the MLM 272 may be statically trained. That is, the MLM may be trained with a data set and deployed on the classification/verification processor 270. In some embodiments, the MLM 272 may be dynamically trained. In these embodiments, the MLM 272 may be trained with an initial data set and deployed on the classification/verification processor 270. However, the MLM 272 may continue to train and be modified based on ultrasound images acquired by the system 200 after deployment of the MLM 272 on the classification/verification processor 270.
[059] In some embodiments, the classification/verification processor 270 may not include a MLM 272 and may instead implement other image processing techniques for feature recognition and/or quality detection such as image segmentation, histogram analysis, edge detection or other shape or object recognition techniques. In some embodiments, the classification/verification processor 270 may implement the MLM 272 in combination with other image processing methods. In some embodiments, the MLM 272 and/or other elements may be selected by a user via the user interface 224.
[060] Outputs from the classification/verification processor 270, the scan converter 230, the multiplanar reformatter 232, and/or the volume renderer 234 may be coupled to an image processor 236 for further enhancement, buffering and temporary storage before being displayed on an image display 238. Although output from the scan converter 230 is shown as provided to the image processor 236 via the classification/verification processor 270, in some embodiments, the output of the scan converter 230 may be provided directly to the image processor 236.
[061] A graphics processor 240 may generate graphic overlays for display with the images. According to examples of the present disclosure, based at least in part on the analysis of the images, the classification/verification processor 270 may provide display data for the zone of the current image as well as the types of ROIs present in the image. The graphics processor 240 may overlay these (verified or corrected) classifications over the image on the display 238. For example, the classified ROIs may be displayed as labels over the region of the image identified as the ROI.
[062] Additional or alternative graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor may be configured to receive input from the user interface 224, such as a typed patient name or other annotations. The user interface 224 can also be coupled to the multiplanar reformatter 232 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
[063] The system 200 may include local memory 242. Local memory 242 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive). Local memory 242 may store data generated by the system 200 including ultrasound images, position information from the IMU 213, executable instructions, imaging parameters, training data sets, or any other information necessary for the operation of the system 200. In some examples, the local memory 242 may store executable instructions in a non-transitory computer readable medium that may be executed by the classification/verification processor 270. In some examples, the local memory 242 may store ultrasound images and/or videos responsive to instructions from the classification/verification processor 270. In some examples, local memory 242 may store other outputs of the classification/verification processor 270, such as the (verified and/or corrected) identification of ROIs in the image.
[064] As mentioned previously system 200 includes user interface 224. User interface 224 may include display 238 and control panel 252. The display 238 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some embodiments, display 238 may include multiple displays. The control panel 252 may be configured to receive user inputs (e.g., exam type, information calculated by and/or displayed from the classification/verification processor 270). The control panel 252 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others). In some embodiments, the control panel 252 may additionally or alternatively include soft controls (e.g., GUI control elements or simply, GUI controls) provided on a touch sensitive display. In some embodiments, display 238 may be a touch sensitive display that includes one or more soft controls of the control panel 252.
[065] In some embodiments, various components shown in FIG. 2 may be combined. For instance, classification/verification processor 270, image processor 236 and graphics processor 240 may be implemented as a single processor. In some embodiments, various components shown in FIG. 2 may be implemented as separate components. For example, signal processor 226 may be implemented as separate signal processors for each imaging mode (e.g., B-mode, Doppler). In some embodiments, one or more of the various processors shown in FIG. 2 may be implemented by general purpose processors and/or microprocessors configured to perform the specified tasks. In some embodiments, one or more of the various processors may be implemented as application specific circuits. In some embodiments, one or more of the various processors (e.g., image processor 236) may be implemented with one or more graphical processing units (GPU).
[066] FIG. 3 is a block diagram illustrating an example processor 300 according to some embodiments of the present disclosure. Processor 300 may be used to implement one or more processors and/or controllers described herein, for example, processors 152 of Figure 1, classification/verification processor 270, image processor 236 shown in FIG. 2 and/or any other processor or controller shown in FIG. 2. Processor 300 may be any suitable processor type including, but not limited to, a microprocessor, a microcontroller, a digital signal processor (DSP), a field programmable gate array (FPGA) where the FPGA has been programmed to form a processor, a graphical processing unit (GPU), an application specific integrated circuit (ASIC) where the ASIC has been designed to form a processor, or a combination thereof.
[067] The processor 300 may include one or more cores 302. The core 302 may include one or more arithmetic logic units (ALU) 304. In some embodiments, the core 302 may include a floating point logic unit (FPLU) 306 and/or a digital signal processing unit (DSPU) 308 in addition to or instead of the ALU 304.
[068] The processor 300 may include one or more registers 312 communicatively coupled to the core 302. The registers 312 may be implemented using dedicated logic gate circuits (e.g., flip-flops) and/or any memory technology. In some embodiments the registers 312 may be implemented using static memory. The registers 312 may provide data, instructions and addresses to the core 302.
[069] In some embodiments, processor 300 may include one or more levels of cache memory 310 communicatively coupled to the core 302. The cache memory 310 may provide computer-readable instructions to the core 302 for execution. The cache memory 310 may provide data for processing by the core 302. In some embodiments, the computer-readable instructions may have been provided to the cache memory 310 by a local memory, for example, local memory attached to the external bus 316. The cache memory 310 may be implemented with any suitable cache memory type, for example, metal-oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology.
[070] The processor 300 may include a controller 314, which may control input to the processor 300 from other processors and/or components included in a system (e.g., control panel 252 and scan converter 230 shown in FIG. 2) and/or outputs from the processor 300 to other processors and/or components included in the system (e.g., display 238 and volume renderer 234 shown in FIG. 2). Controller 314 may control the data paths in the ALU 304, FPLU 306 and/or DSPU 308. Controller 314 may be implemented as one or more state machines, data paths and/or dedicated control logic. The gates of controller 314 may be implemented as standalone gates, FPGA, ASIC or any other suitable technology. [071] The registers 312 and the cache 310 may communicate with controller 314 and core 302 via internal connections 320A, 320B, 320C and 320D. Internal connections may be implemented as a bus, multiplexor, crossbar switch, and/or any other suitable connection technology.
[072] Inputs and outputs for the processor 300 may be provided via a bus 316, which may include one or more conductive lines. The bus 316 may be communicatively coupled to one or more components of processor 300, for example the controller 314, cache 310, and/or register 312. The bus 316 may be coupled to one or more components of the system, such as display 238 and control panel 252 mentioned previously.
[073] The bus 316 may be coupled to one or more external memories. The external memories may include Read Only Memory (ROM) 332. ROM 332 may be a masked ROM, Electronically Programmable Read Only Memory (EPROM) or any other suitable technology. The external memory may include Random Access Memory (RAM) 333. RAM 333 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology. The external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 335. The external memory may include Flash memory 334. The external memory may include a magnetic storage device such as disc 336. In some embodiments, the external memories may be included in a system, such as imaging system 100 of Figure 1 and/or ultrasound imaging system 200 shown in FIG. 2. For example, local memory 242 may include one or more of ROM 332, RAM 333, EEPROM 335, flash 334, and/or disc 336.
[074] In some examples, one or more processors, such as processor 300, may execute computer readable instructions encoded on one or more of the memories (e.g., memories 160, 242, 332, 333, 335, 334, and/or 336). As noted, in some examples, processor 300 may be used to implement one or more processors of an ultrasound imaging system, such as ultrasound imaging system 200. In some examples, the memory encoded with the instructions may be included in the ultrasound imaging system, such as local memory 242. In some examples, the processor and/or memory may be in communication with one another and the ultrasound imaging system, but the processor and/or memory may not be included in the ultrasound imaging system. Execution of the instructions may cause the ultrasound imaging system to perform one or more functions. In some examples, a non-transitory computer readable medium may be encoded with instructions that when executed may cause an ultrasound imaging system to determine whether one or more anatomical features are included in an ultrasound image. In some examples, some or all of the functions may be performed by one processor. In some examples, some or all of the functions may be performed, at least in part, by multiple processors. In some examples, other components of the ultrasound imaging system may perform functions responsive to control signals provided by the processor based on the instructions. For example, the display may display the visual indication based, at least in part, on data received from one or more processors (e.g., graphics processor 240, which may include one or more processors 300).
[075] In some examples, the system 200 may be configured to implement one or more machine learning models, such as a neural network, included in the classification/verification processor 270. The MLM may be trained with imaging data such as image frames where one or more items of interest are labeled as present.
[076] In some embodiments, a MLM training algorithm associated with the MLM can be presented with thousands or even millions of training data sets in order to train the MLM to determine a confidence level for each measurement acquired from a particular ultrasound image. In various embodiments, the number of ultrasound images used to train the MLM may range from about 1,000 to 200,000 or more. The number of images used to train the MLM may be increased to accommodate a greater variety of patient variation, e.g., weight, height, age, etc. The number of training images may differ for different organs or features thereof, and may depend on variability in the appearance of certain organs or features. For example, the organs of pediatric patients may have a greater range of variability than organs of adult patients. Training the network(s) to determine the pose of an image with respect to an organ model associated with an organ for which population-wide variability is high may necessitate a greater volume of training images.
[077] FIG. 4 shows a block diagram of a process for training and deployment of a machine learning model according to some embodiments of the present disclosure. The process shown in FIG. 4 may be used to train the MLM 162 of Figure 1 and/or 272 of Figure 2 included in the classification/verification processor 270. The left hand side of FIG. 4, phase 1, illustrates the training of a MLM. To train the MLM, training sets which include multiple instances of input arrays and output classifications may be presented to the training algorithm(s) of the MLM (e.g., the AlexNet training algorithm, as described by Krizhevsky, A., Sutskever, I. and Hinton, G. E. “ImageNet Classification with Deep Convolutional Neural Networks,” NIPS 2012, or its descendants). Training may involve the selection of a starting (blank) architecture 412 and the preparation of training data 414. The starting architecture 412 may be an architecture (e.g., an architecture for a neural network with defined layers and arrangement of nodes but without any previously trained weights) or a partially trained network, such as the inception networks, which may then be further tailored for classification of ultrasound images. The starting architecture 412 (e.g., blank weights) and training data 414 are provided to a training engine 410 for training the MLM. Upon a sufficient number of iterations (e.g., when the MLM performs consistently within an acceptable error), the model 420 is said to be trained and ready for deployment, which is illustrated in the middle of FIG. 4, phase 2. On the right hand side of FIG. 4, in phase 3, the trained model 420 is applied (via inference engine 430) for analysis of new data 432, which is data that has not been presented to the model 420 during the initial training (in phase 1). For example, the new data 432 may include unknown images such as ultrasound images acquired during a scan of a patient (e.g., torso images acquired from a patient during a FAST exam). The trained model 420 implemented via engine 430 is used to classify the unknown images in accordance with the training of the model 420 to provide an output 434 (e.g., which anatomical features are included in the image, what zone the image was acquired from, or a combination thereof). The output 434 may then be used by the system for subsequent processes 440 (e.g., the output of an MLM 162 may be used to classify zones and/or ROIs in the image and/or verification of the output based on IMU information). In embodiments where the MLM 162 is dynamically trained, the inference engine 430 may be modified by field training data 438. Field training data 438 may be generated in a similar manner as described with reference to phase 1, but the new data 432 may be used as the training data. In other examples, additional training data may be used to generate field training data 438.
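The sketch below illustrates the three phases (training, deployment, and field/dynamic updating) with a toy model and random stand-in data; it shows only the flow of FIG. 4, not the disclosed training procedure or data.

```python
import torch
import torch.nn as nn

# Illustrative sketch only. The tiny linear model and random data are
# placeholders for a real MLM and a labelled training set.

def train(model, images, labels, epochs=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                   # iterate until error is acceptable
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model

model = nn.Linear(64, 3)                          # stand-in "starting architecture"
train_x, train_y = torch.randn(32, 64), torch.randint(0, 3, (32,))
trained = train(model, train_x, train_y)          # phase 1: training

new_x = torch.randn(4, 64)                        # phases 2/3: deployment on new data
with torch.no_grad():
    output = trained(new_x).argmax(dim=1)         # inference engine output
print(output.shape)                               # torch.Size([4])

# Dynamic (field) training: new data, once labelled, can further modify the model.
field_y = torch.randint(0, 3, (4,))
trained = train(trained, new_x, field_y, epochs=1)
```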
[078] FIG. 5 is a flow chart of an example workflow of an imaging system according to some embodiments of the present disclosure. The workflow 500 may, in some embodiments, represent a method performed by an imaging system such as 100 of Figure 1 and/or 200 of Figure 2.
[079] The workflow 500 shows an example image 510 which is collected by an ultrasound system. The image 510 may represent a 'raw' image, one that has been processed but not yet classified/verified; for example, the image 510 may be the output of the image processor 236 to the classification/verification processor 270 of Figure 2.
[080] The image 510 is provided as an input to the block 520, which represents the operation of a classification/verification processor. Block 520 includes block 522, which describes detecting and classifying organs in the ultrasound image 510, and block 524, which describes verifying the detection/classification based on position information from an IMU such as 112 of Figure 1 and/or 213 of Figure 2.
[081] Based on the steps of block 520, the imaging system may generate a labeled image 530, which represents the image 510 with labels applied to indicate the (verified) classified zone and the organs within the image. For example, a label "RUQ" may be displayed on the labeled image 530 to indicate that the image has been classified as a view of the RUQ zone, and the labels "liver," "diaphragm," and "kidney" are displayed over the regions which the imaging system has classified as those organs. The labels displayed in the labeled image 530 have been verified based on the zone determined from the IMU data. For example, the IMU data may agree with the MLM that this is a view of the RUQ, and thus it is appropriate for those three organs to appear.
[082] In some embodiments, the steps of the workflow 500 may be repeated each time a new image is captured. In this manner, the user may be presented with labeled images 530 in real time or close to real time (e.g., without undue delay by the processor). For example, the labeled images 530 may be updated at or near a video frame rate.
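One way to read workflow 500 is as a per-frame loop. The sketch below shows that structure only; the function names (classify_rois, verify_with_imu, acquisition_stream) and the LabeledFrame container are hypothetical placeholders introduced for the example and are not components of the system described above.

```python
from dataclasses import dataclass

@dataclass
class LabeledFrame:
    image: object          # the raw frame (e.g., a numpy array)
    zone: str              # verified zone label, e.g. "RUQ"
    organs: dict           # organ name -> region (e.g., a bounding box)

def process_frame(image, imu_sample, classify_rois, verify_with_imu):
    """Run blocks 522 and 524 on a single frame and return a labeled frame (530)."""
    zone, organs = classify_rois(image)                       # block 522: MLM detection/classification
    zone, organs = verify_with_imu(zone, organs, imu_sample)  # block 524: IMU-based verification
    return LabeledFrame(image=image, zone=zone, organs=organs)

# In a live exam, process_frame would be called once per acquired frame so that
# the labeled view 530 updates at or near the video frame rate, e.g.:
#
#   for image, imu_sample in acquisition_stream():
#       display(process_frame(image, imu_sample, classify_rois, verify_with_imu))
```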
[083] FIG. 6 is a set of diagrams which show examples of IMU position information at different patient positions according to some embodiments of the present disclosure. The diagrams each show an example of data captured from a 3-axis accelerometer which is part of an IMU such as IMU 112 of Figure 1 and/or 213 of Figure 2. FIG. 6 is divided into three example diagrams, 610, 620, and 630, each of which shows an example of the positioning of a probe next to an imaging dummy. The probe is positioned for imaging of the RUQ zone. Each set of diagrams also shows an example of the data from the IMU during imaging of the RUQ. The first diagram 610 shows imaging of the RUQ while the subject is in a supine position, the second diagram 620 shows imaging of the RUQ while the subject is in an inclined position, and the third diagram 630 shows imaging of the RUQ while the subject is in a sitting position.
[084] As can be seen, the IMU signal may be distinct for each of the three imaging positions represented by diagrams 610-630, even though the position of the probe relative to the subject remains essentially constant. The imaging system may have a set of criteria for determining which zone is being imaged based on the IMU signals expected for different patient positions and different zones. Data sets such as those shown in FIG. 6 may be used to develop such criteria (e.g., position matching information such as 165 of Figure 1), which may then be used to determine which zone is being imaged based on the position information from the IMU.
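As one illustration of how such criteria might be encoded, the sketch below matches a 3-axis accelerometer sample against stored reference orientations for each zone using cosine similarity. The reference vectors, zone set, and threshold are invented for the example; they are not the position matching information 165, which would instead be derived from data sets such as those in FIG. 6, possibly per patient position.

```python
import math

# Hypothetical reference gravity directions (x, y, z) per zone for a supine subject.
ZONE_REFERENCES = {
    "RUQ": (0.7, 0.0, -0.7),
    "LUQ": (-0.7, 0.0, -0.7),
    "SP":  (0.0, 0.7, -0.7),
    "PC":  (0.0, -0.3, -0.95),
}

def _cosine(a, b):
    """Cosine similarity between two 3-vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_zone(accel_sample, references=ZONE_REFERENCES, min_similarity=0.8):
    """Return the zone whose reference orientation best matches the IMU sample,
    or None if no reference matches well enough."""
    best_zone, best_score = None, min_similarity
    for zone, reference in references.items():
        score = _cosine(accel_sample, reference)
        if score > best_score:
            best_zone, best_score = zone, score
    return best_zone

print(match_zone((0.65, 0.05, -0.75)))   # -> "RUQ" for this made-up sample
```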
[085] FIGS. 7A-7C are a set of images of example classification and verification steps according to some embodiments of the present disclosure. FIGS. 7A-7C show a number of images which represent different example steps in image acquisition, classification and verification by an imaging system such as 100 of Figure 1 and/or 200 of Figure 2.
[086] Each of FIGS. 7A-7C shows an example set of images and IMU data 710-740 which represent different example imaging, classification, and verification scenarios. Each of FIGS. 7A-7C includes a raw image 710, which shows the unclassified image generated by an ultrasound system; a classified image 720, which shows the classification applied to the image 710 by an MLM; a graph of position information 730 from an IMU; and a verified image 740, which has been updated based on verification of the classified ROIs with the position information 730 from the IMU. The verified image 740 represents the view which would normally be presented to the user. The images 710 and 720, as well as the position information 730, may not normally be displayed (unless enabled by settings).
[087] The verified images 740 include any corrections applied to the classifications in the images 720. Each of FIGS. 7A-7C shows a different example scenario which illustrates the types of verifications that may be performed.
[088] Figure 7A shows an image 710a which is classified in order to generate a labeled classified image 720a. The MLM has classified the liver, diaphragm, and kidney within the image and has thus determined that this is a view of the RUQ. The graph of position information 730a shows position information which matches imaging of the RUQ. Accordingly, the classification by the MLM may be verified, and the verified image 740a matches the classified image 720a since no changes are needed.
[089] Figure 7B shows an image 710b which is classified in order to generate a labeled classified image 720b. The MLM has determined that the image 720b includes the liver and is a view of the RUQ. However, the position information 730b does not match the position information expected for RUQ zone imaging, and instead matches SP zone imaging. Accordingly, in the verified image 740b the label has been changed from RUQ to SP, and the label for the liver has been removed, since that organ is not expected to be seen in the SP zone (e.g., based on an ROI model such as 168 of Figure 1).
[090] Figure 7C shows an image 710c which is classified by the MLM to generate a labeled classified image 720c. Similar to the image 720a, the image 720c has been classified as an RUQ zone view with a liver, diaphragm, and kidney. However, unlike in Figure 7A, the IMU data 730c of Figure 7C does not match RUQ imaging, and instead represents LUQ imaging. Accordingly, the verified image 740c has been changed to show that this is a view of the LUQ zone. In addition, the liver does not generally appear in the LUQ zone. However, based on knowledge that the liver and spleen have very similar appearances (e.g., in the ROI model 168 of Figure 1), the label 'liver' has been changed to 'spleen' to represent the correct organ.
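The three scenarios of FIGS. 7A-7C can be summarized by a small correction rule. In the sketch below, the dictionaries of expected organs per zone and of look-alike organs are illustrative stand-ins for an ROI model such as 168 of Figure 1; they are not the model itself, and the zone and organ sets are assumptions made for the example.

```python
# Hypothetical ROI model: organs expected to be visible in each zone.
EXPECTED_ORGANS = {
    "RUQ": {"liver", "diaphragm", "kidney"},
    "LUQ": {"spleen", "diaphragm", "kidney"},
    "SP":  {"bladder"},
}

# Organs with very similar sonographic appearance that an MLM may confuse.
LOOK_ALIKES = {"liver": "spleen", "spleen": "liver"}

def verify_labels(mlm_zone, mlm_organs, true_zone):
    """Correct MLM organ labels against the zone determined from the IMU."""
    if mlm_zone == true_zone:
        return true_zone, list(mlm_organs)          # FIG. 7A: nothing to change
    expected = EXPECTED_ORGANS.get(true_zone, set())
    corrected = []
    for organ in mlm_organs:
        if organ in expected:
            corrected.append(organ)                 # label is plausible in the true zone
        elif LOOK_ALIKES.get(organ) in expected:
            corrected.append(LOOK_ALIKES[organ])    # FIG. 7C: 'liver' relabeled as 'spleen'
        # otherwise the label is dropped            # FIG. 7B: 'liver' removed in the SP zone
    return true_zone, corrected

print(verify_labels("RUQ", ["liver"], "SP"))                          # -> ('SP', [])
print(verify_labels("RUQ", ["liver", "diaphragm", "kidney"], "LUQ"))  # -> ('LUQ', ['spleen', 'diaphragm', 'kidney'])
```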
[091] FIG. 8 is a flow chart of a method according to some embodiments of the present disclosure. The method 800 may, in some embodiments, be implemented by one or more of the systems and apparatuses described herein, such as by the imaging system 100 of Figure 1 and/or the ultrasound imaging system 200 of Figure 2. The method 800 may be implemented in hardware, software, or a combination thereof. For example, the method 800 may represent instructions loaded into non-transitory computer readable memory such as memory 160 of Figure 1, 242 of Figure 2, and/or 332-336 of Figure 3 and executed by a processor such as 152 of Figure 1, 240/236/272 of Figure 2, and/or 300 of Figure 3.
[092] The method 800 may generally begin with box 810, which describes acquiring an image from a subject with an imaging unit. For example, the image may be acquired using the imaging unit 110 of Figure 1. In some example embodiments, box 810 may include acquiring an ultrasound image from a subject with an ultrasound probe (e.g., 212 of Figure 2). The ultrasound imaging may include directing ultrasound into the subject and receiving reflected sound energy (echoes) with a transducer (e.g., 214 of Figure 2). The imaging may include generating the image based on the received echoes. The image may be collected as part of a series of images, for example as part of an exam. The method 800 may include performing an exam on the subject, such as a FAST exam.
[093] Box 810 may generally be followed by box 820, which describes acquiring position information associated with the image based on an IMU (e.g., 112 of Figure 1 and/or 213 of Figure 2) in the imaging unit. The method 800 may include measuring the position of the imaging unit with the IMU and generating the position information based on the measured position. For example, the IMU may include an accelerometer, and the method 800 may include measuring an orientation of the imaging unit.
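For example, a static accelerometer reading can be converted into an orientation estimate for the imaging unit. The sketch below computes tilt angles from the measured gravity vector; the probe-fixed axis convention and the assumption that gravity is the only acceleration are simplifications made for this example, not a description of the IMU 112/213.

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Estimate probe pitch and roll (in degrees) from a static 3-axis
    accelerometer sample, assuming the only measured acceleration is gravity
    and a right-handed x/y/z axis convention fixed to the probe (an assumption
    made for this sketch)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Under this convention, a reading of roughly (0, 0, 1) g corresponds to a level probe:
print(tilt_from_accelerometer(0.02, 0.01, 0.99))
```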
[094] In some embodiments, the steps of boxes 810 and 820 may happen more or less simultaneously with each other. For example, the method 800 may include measuring the position information while acquiring the image.
[095] Box 820 may be followed by box 830 which describes classifying one or more regions of interest in the image with an MLM (e.g., 162 of Figure 1 and/or 272 of Figure 2). For example, the method 800 may include classifying one or more regions of the image as an organ or part thereof.
[096] Box 830 may be followed by box 840, which describes verifying the classified regions of interest based on the position information. For example, the method 800 may include determining a true imaging zone based on the position information and comparing the expected regions of interest for the true imaging zone to the ROIs classified by the MLM. If the classified and expected ROIs match, then the classification of the ROIs may be verified and the classification may be left unchanged. If the expected and classified ROIs do not match, then the method may include changing or removing the classification of the ROIs. For example, the method 800 may include comparing the classified ROIs to an ROI model (e.g., 168 of Figure 1) and removing or changing any classifications of types of ROIs which are not expected in the true imaging zone. In some embodiments, the method 800 may include changing a classification of one of the regions of interest from a first organ to a second organ based on a similarity between the appearance of the first organ and the second organ and the presence of the second organ, but not the first organ, in the verified imaging zone. For example, a classification of 'liver' in the MLM-classified RUQ zone may be changed to a classification of 'spleen' when the IMU-determined true zone is the LUQ.
[097] In some embodiments, the method 800 may include generating display data which includes the image with a label based on the verified imaging zone. The display data may also include labels for the verified ROIs in the image.
[098] In some embodiments, the method 800 may include determining if the imaging unit is correctly positioned based on the position information. For example, the method 800 may include determining whether the imaging probe is upside down. In some embodiments, the method 800 may include alerting a user (e.g., via a tone, a display, or combinations thereof) if the imaging unit is not in the correct position. In some embodiments, the method 800 may include applying a transformation to the images before providing the images to the MLM for classification. For example, the method 800 may include flipping the image if the imaging unit is upside down.
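As a concrete illustration of this check, the sketch below flags an inverted probe from an accelerometer component and flips the image before classification. The sign convention, threshold, and assumption that inversion mirrors the lateral image direction are made up for the example; a real system would use whatever convention its IMU and probe geometry define.

```python
import numpy as np

def prepare_image_for_mlm(image, accel_z, threshold=0.5):
    """Flip the image left-right if the probe appears to be upside down.

    Assumes (for this sketch) that accel_z is the gravity component along the
    probe axis and is positive when the probe is held in its normal orientation.
    """
    upside_down = accel_z < -threshold
    if upside_down:
        # An inverted probe mirrors the lateral direction of the frame, so undo
        # the mirroring before the frame is passed to the MLM for classification.
        image = np.fliplr(image)
    return image, upside_down

frame = np.zeros((128, 128), dtype=np.float32)   # placeholder ultrasound frame
corrected, inverted = prepare_image_for_mlm(frame, accel_z=-0.9)
if inverted:
    print("Warning: probe appears to be upside down")  # e.g., alert via tone or display
```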
[099] While many of the examples provided herein refer to the FAST exam, the disclosure is not limited to FAST exams. For example, any ultrasound exam that has a set of standard images, videos, measurements, or a combination thereof, associated with the exam may utilize the features of the present disclosure.
[0100] In various embodiments where components, systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as “C”, “C++”, “C#”, “Java”, “Python”, and the like. Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein. For example, if a computer disk containing appropriate materials, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
[0101] In view of this disclosure it is noted that the various methods and devices described herein can be implemented in hardware, software, and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and the equipment needed to effect these techniques, while remaining within the scope of the invention. The functionality of one or more of the processors described herein may be incorporated into a fewer number of processing units or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
[0102] Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system. Certain additional advantages and features of this disclosure may be apparent to those skilled in the art upon studying the disclosure, or may be experienced by persons employing the novel system and method of the present disclosure. Another advantage of the present systems and method may be that conventional medical image systems can be easily upgraded to incorporate the features and advantages of the present systems, devices, and methods.
[0103] Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
[0104] Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.

Claims

CLAIMS
What is claimed is:
1. An imaging system comprising:
an imaging unit configured to acquire an image from a subject, the imaging unit comprising an inertial measurement unit (IMU) configured to generate position information based on a position of the imaging unit; and
a processor configured to:
receive the image from the imaging unit;
identify one or more regions of interest in the image;
determine a true zone of the subject based on the position information;
compare the identified one or more regions of interest to expected regions of interest based on the true zone; and
remove or change the identification of the one or more regions of interest if they do not match the expected regions of interest.
2. The imaging system of claim 1, wherein the one or more regions of interest represent organs and the expected regions of interest are a list of organs expected to be visible in an image of the true zone.
3. The imaging system of claim 1, wherein the processor implements a machine learning model configured to identify the one or more regions of interest based on the image.
4. The imaging system of claim 1, wherein the imaging system is an ultrasound imaging system and wherein the imaging unit is a probe which includes a transducer array.
5. The imaging system of claim 1, wherein the processor is further configured to generate display data based on the image and a label for the determined true zone based on the position information, and wherein the imaging system further comprises a display configured to display the display data.
6. The imaging system of claim 1, wherein the IMU comprises an accelerometer configured to generate the position information.
7. A non-transitory computer readable medium encoded with instructions that when executed, cause an imaging system to:
classify one or more regions of interest in an image based on a machine learning model (MLM);
determine a true imaging zone based on position information associated with the image;
compare the classified one or more regions of interest to expected regions of interest based on the true imaging zone; and
correct the classifications if the classified one or more regions of interest based on the MLM do not match the expected regions of interest based on the true imaging zone.
8. The non-transitory computer readable medium of claim 7, wherein the instructions, when executed, cause the imaging system to correct the classifications by changing or removing the classified one or more regions of interest.
9. The non-transitory computer readable medium of claim 8, wherein the instructions, when executed, cause the imaging system to remove the classification of selected ones of the one or more regions of interest if the selected ones are not expected to appear in the true imaging zone.
10. The non-transitory computer readable medium of claim 7, wherein the instructions, when executed, cause the imaging system to generate display data based on the corrected classifications.
11. A method comprising:
acquiring an ultrasound image from a subject with an ultrasound probe;
acquiring position information associated with the ultrasound image based on an inertial measurement unit (IMU) in the ultrasound probe;
classifying one or more regions of interest in the image with a machine learning model (MLM); and
verifying the classification of the one or more regions of interest based on the position information.
12. The method of claim 11, further comprising: comparing the classified one or more regions of interest to expected regions of interest based on the position information; and updating the classification of the one or more regions of interest if the classification does not match the expected one or more regions of interest.
13. The method of claim 12, further comprising removing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM.
14. The method of claim 12, further comprising changing a classification of selected ones of the one or more regions of interest if the imaging zone based on the position information does not match the imaging zone classified by the MLM.
15. The method of claim 14, further comprising changing a classification of one of the regions of interest from a first organ to a second organ based on a similarity between the appearance of the first organ and the second organ and the presence of the second organ but not the first organ in the verified classified imaging zone.
16. The method of claim 11, further comprising: determining a true imaging zone based on the position information; and verifying the classification based, in part, on the true imaging zone.
17. The method of claim 11, further comprising generating display data which includes the image with a label based on the verified classifications.
18. The method of claim 11, further comprising determining if the ultrasound probe is correctly positioned based on the position information.
19. The method of claim 18, further comprising: determining that the ultrasound probe is upside down; and flipping the ultrasound image if the ultrasound probe is upside down.
20. The method of claim 11, further comprising performing a focused assessment with sonography for trauma (FAST) exam with the ultrasound probe, wherein the ultrasound image is part of the FAST exam.
PCT/EP2024/084209 2023-12-11 2024-12-02 Systems and methods for imaging screening Pending WO2025124940A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363608442P 2023-12-11 2023-12-11
US63/608,442 2023-12-11

Publications (1)

Publication Number Publication Date
WO2025124940A1 true WO2025124940A1 (en) 2025-06-19

Family

ID=93799681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/084209 Pending WO2025124940A1 (en) 2023-12-11 2024-12-02 Systems and methods for imaging screening

Country Status (1)

Country Link
WO (1) WO2025124940A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530885B1 (en) 2000-03-17 2003-03-11 Atl Ultrasound, Inc. Spatially compounded three dimensional ultrasonic images
US6443896B1 (en) 2000-08-17 2002-09-03 Koninklijke Philips Electronics N.V. Method for creating multiplanar ultrasonic images of a three dimensional object
US20180153513A1 (en) * 2013-02-28 2018-06-07 Rivanna Medical Llc Localization of Imaging Target Regions and Associated Systems, Devices and Methods
US20190200963A1 (en) * 2016-09-16 2019-07-04 Fujifilm Corporation Ultrasound diagnostic apparatus and control method of ultrasound diagnostic apparatus
US20210068791A1 (en) * 2018-03-08 2021-03-11 Koninklijke Philips N.V. A system and method of identifying characteristics of ultrasound images
US20220148158A1 (en) * 2020-11-06 2022-05-12 EchoNous, Inc. Robust segmentation through high-level image understanding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEE HYEONWOO ET AL: "Automated Anatomical Feature Detection for Completeness of Abdominal FAST Exam", 2023 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS), IEEE, 3 September 2023 (2023-09-03), pages 1 - 4, XP034465601, DOI: 10.1109/IUS51837.2023.10306598 *

Similar Documents

Publication Publication Date Title
CN111683600B (en) Apparatus and method for obtaining anatomical measurements from ultrasound images
CN112040876A (en) Adaptive ultrasound scanning
JP7672398B2 (en) Systems and methods for image optimization - Patents.com
EP4125606B1 (en) Systems and methods for imaging and measuring epicardial adipose tissue
CN114795276B (en) Method and system for automatically estimating liver and kidney index from ultrasound images
WO2021099171A1 (en) Systems and methods for imaging screening
CN113194837A (en) System and method for frame indexing and image review
US9033883B2 (en) Flow quantification in ultrasound using conditional random fields with global consistency
US12396702B2 (en) Systems, methods, and apparatuses for quantitative assessment of organ mobility
US10319090B2 (en) Acquisition-orientation-dependent features for model-based segmentation of ultrasound images
US12422548B2 (en) Systems and methods for generating color doppler images from short and undersampled ensembles
CN113081030B (en) Method and system for assisting ultrasound scanning plane identification based on M-mode analysis
EP4210588B1 (en) Systems and methods for measuring cardiac stiffness
WO2025124940A1 (en) Systems and methods for imaging screening
US20240173007A1 (en) Method and apparatus with user guidance and automated image setting selection for mitral regurgitation evaluation
WO2024013114A1 (en) Systems and methods for imaging screening
WO2025087746A1 (en) Systems and methods for imaging screening
WO2025098957A1 (en) Ultrasound sweep evaluation systems and methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24817910

Country of ref document: EP

Kind code of ref document: A1