
WO2025087746A1 - Systems and methods for imaging screening - Google Patents

Systems and methods for imaging screening

Info

Publication number
WO2025087746A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
ultrasound
processor
ultrasound image
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/079083
Other languages
English (en)
Inventor
Hyeonwoo LEE
Mohsen ZAHIRI
Goutam GHOSHAL
Balasundar Iyyavu Raju
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Publication of WO2025087746A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/461: Displaying means of special interest
    • A61B 8/463: Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis involving processing of medical diagnostic data
    • A61B 8/5223: Devices using data or image processing specially adapted for diagnosis involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 8/5292: Devices using data or image processing specially adapted for diagnosis using additional data, e.g. patient information, image labeling, acquisition parameters
    • A61B 8/54: Control of the diagnostic device

Definitions

  • the present disclosure pertains to imaging systems and methods for monitoring the progress of an imaging exam; more specifically, the present disclosure pertains to monitoring the progress of an ultrasound exam.
  • Ultrasound exams are valuable for a wide variety of diagnostic purposes such as fetal development monitoring, cardiac valve health assessment, liver disease monitoring, and detecting internal bleeding.
  • Accurate diagnosis from ultrasound images relies on capturing correct views of the anatomy as well as on the quality of the images (e.g., resolution).
  • the diagnostic value of the images may decrease if insufficient and/or incorrect views of the anatomy are obtained or the images are of poor quality (e.g., blurred due to motion artefacts). Accordingly, techniques for ensuring completeness of ultrasound exams and quality of ultrasound images may be desirable.
  • the present disclosure addresses the challenges of conducting focused assessment with sonography in trauma (FAST) exams by determining a scan completeness score for each zone/region explored during the FAST exam.
  • the system described herein may provide a list of tasks for the zone being examined.
  • the system may automatically detect task completion based on anatomical features detected in the imagery and may provide a scan score/meter as feedback to the user. This may enhance exam quality and improve the sensitivity of the FAST exam irrespective of experience level.
  • the system may be used as a tool for physician training and/or used for automated skill level analysis of physicians during and/or after the training.
  • an ultrasound imaging system includes an ultrasound probe configured to acquire an ultrasound image from a subject, a display configured to provide the ultrasound image, and a processor.
  • the processor is configured to display on the display a set of tasks to be completed during an exam, receive the ultrasound image, determine whether one or more anatomical features are included in the ultrasound image, determine a status of a current task among the set of tasks based on the anatomical features included in the ultrasound image, and provide display data to the display based on the status of the current task, wherein the display is further configured to provide a visual indication of the status based on the display data.
  • the anatomical features include free fluid, and the processor is further configured to indicate completion of the exam upon a determination that free fluid is included in the ultrasound image.
  • the free fluid detection may be displayed as a first task to be completed during the exam.
  • the processor is further configured to determine a zone within the subject where the ultrasound image was acquired, and provide second display data to the display based on the set of tasks associated with the zone.
  • the display may be further configured to provide a second visual indication of the set of tasks based on the second display data.
  • a completed task of the set of tasks is displayed differently than an uncompleted task of the set of tasks.
  • the processor implements a machine learning model configured to determine whether the anatomical features are included in the ultrasound image.
  • the machine learning model is further configured to determine whether the free fluid is included in the ultrasound image.
  • the processor is further configured to determine whether the ultrasound image is high quality or low quality prior to determining whether the anatomical features are included.
  • a user interface is included that is configured to receive an input from the user, where the input indicates an exam type, a zone of the subject, or a combination thereof.
  • a non-transitory computer readable medium is provided that is encoded with instructions that, when executed, cause an ultrasound imaging system to display on a display a set of tasks to be completed during an exam, receive an ultrasound image, determine whether one or more anatomical features are included in the ultrasound image, wherein the anatomical features include free fluid, determine a status of a current task among the set of tasks based on the anatomical features included in the ultrasound image, provide display data to the display based on the status of the current task, wherein the display is further configured to provide a visual indication of the status based on the display data, and indicate completion of the exam upon a determination that free fluid is included in the ultrasound image.
  • free fluid detection is displayed as a first task to be completed during the exam.
  • the instructions when executed further cause the ultrasound imaging system to determine a zone within the subject where the ultrasound image was acquired, and provide second display data to the display based on the set of tasks associated with the zone, wherein the display is further configured to provide a second visual indication of the set of tasks based on the second display data.
  • a method of conducting an ultrasound exam includes acquiring an ultrasound image from a subject with an ultrasound probe, determining with at least one processor whether one or more anatomical features are included in the ultrasound image, determining a status of a task of the exam based on the anatomical features included in the ultrasound image, and providing on a display a visual indication of the status.
  • the anatomical features include free fluid
  • the exam is terminated when the at least one processor determines that free fluid is included in the ultrasound image.
  • the method further includes determining a zone within the subject where the ultrasound image was acquired, and providing a second visual indication of a set of tasks based on the zone.
  • a completed task of the set of tasks is displayed differently than an uncompleted task of the set of tasks.
  • the completed task is a different color than the uncompleted task.
  • the method further includes determining whether the ultrasound image is high quality or low quality prior to determining whether the anatomical features are included.
  • in some examples, the method further includes determining whether the ultrasound image is high quality or low quality based, at least in part, on whether the anatomical features are included.
  • in some examples, the method further includes saving to a memory the status of a completed task in a set of tasks stored as being associated with at least one of the zone or the anatomical feature identified.
  • the ultrasound image comprises a three-dimensional (3D) dataset.
  • the method further includes differentiating with the at least one processor the free fluid in the ultrasound image from other contained fluids in the ultrasound image.
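  • As an editorial illustration of the workflow summarized above, the following minimal Python sketch ties the claimed steps together (display a task list, detect features per image, update task status, and end the exam when free fluid is found). All identifiers (Task, run_exam, get_frames, detect_features, show) are hypothetical and not from the disclosure:

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            done: bool = False

        def run_exam(tasks, get_frames, detect_features, show):
            """Per frame: detect anatomical features, update task status,
            and end the exam early if free fluid is detected."""
            for frame in get_frames():
                features = detect_features(frame)   # e.g., {"liver", "kidney"}
                if "free fluid" in features:
                    show("Free fluid detected - exam complete")
                    return
                for task in tasks:
                    if not task.done and task.name in features:
                        task.done = True            # task satisfied by this image
                show([(t.name, t.done) for t in tasks])  # visual indication of status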
  • FIG. 1 is a block diagram of an ultrasound system in accordance with examples of the present disclosure.
  • FIG. 2 is a block diagram illustrating an example processor in accordance with examples of the present disclosure.
  • FIG. 3 is a block diagram of a process for training and deployment of a neural network in accordance with examples of the present disclosure.
  • FIG. 4 shows example predictions made by a machine learning model compared to ground truth in accordance with examples of the present disclosure.
  • FIG. 5 is a flowchart providing an overview of exam completeness analysis in accordance with the examples of the present disclosure.
  • FIG. 6 shows an example of a display providing a visual indication of tasks to be completed and completed in accordance with examples of the present disclosure.
  • FIG. 7 is a flow chart of a method according to examples of the present disclosure.
  • FIG. 8 is a flow chart of another method according to examples of the present disclosure.
  • the Focused Assessment with Sonography for Trauma (FAST) exam is a rapid ultrasound exam conducted in trauma situations to assess patients for free fluid.
  • Different zones (e.g., regions of the body) are examined during the FAST exam.
  • Zones typically include the right upper quadrant (RUQ), the left upper quadrant (LUQ), and the pelvis (SP).
  • Zones may further include the lung and heart.
  • Each zone may include one or more regions of interest (ROIs), which may be organs or particular views of organs.
  • a typical FAST exam includes images of the kidney, liver, liver tip, diaphragm, kidney-liver interface, diaphragm-liver interface, and volume fanning acquired from the RUQ zone.
  • a subxiphoid view of the heart is acquired.
  • the FAST exam is a highly important step in triaging patient care in trauma situations.
  • the FAST exam is a highly valuable diagnostic tool in trauma situations. For example, detection of free fluid may allow diagnosis of internal bleeding and/or trauma to internal organs.
  • different studies have reported a large sensitivity range for the FAST exam.
  • the major factor contributing to low sensitivity exams is insufficient scanning by physicians. Inexperienced or less experienced physicians often do not scan enough to interrogate the entire abdominal volume, leaving the free fluid exploration task incomplete.
  • Studies have found that novice users spend more time on the FAST exam and image fewer points of interest as compared to experienced users.
  • the systems and methods described herein may be used with point of care ultrasound (POCUS) systems.
  • a machine learning model may be trained and deployed to determine what ROIs have been imaged. The determinations may be used to detect and/or score completion of tasks within a scan (e.g., imaging an ROI and/or a view of an ROI) and to classify and/or score completeness of the scan.
  • FIG. 1 shows a block diagram of an ultrasound imaging system 100 constructed in accordance with the examples of the present disclosure.
  • An ultrasound imaging system 100 may include a transducer array 114, which may be included in an ultrasound probe 112, for example an external probe or an internal probe such as an intravascular ultrasound (IVUS) catheter probe.
  • the transducer array 114 may be in the form of a flexible array configured to be conformally applied to a surface of subject to be imaged (e.g., patient).
  • the transducer array 114 is configured to transmit ultrasound signals (e.g., beams, waves) and receive echoes responsive to the ultrasound signals.
  • a variety of transducer arrays may be used, e.g., linear arrays, curved arrays, or phased arrays.
  • the transducer array 114 can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging.
  • the axial direction is the direction normal to the face of the array (in the case of a curved array, the axial directions fan out), the azimuthal direction is defined generally by the longitudinal dimension of the array, and the elevation direction is transverse to the azimuthal direction.
  • the transducer array 114 may be coupled to a microbeamformer 116, which may be located in the ultrasound probe 112, and which may control the transmission and reception of signals by the transducer elements in the array 114.
  • the microbeamformer 116 may control the transmission and reception of signals by active elements in the array 114 (e.g., an active subset of elements of the array that define the active aperture at any given time).
  • the microbeamformer 116 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 118, which switches between transmission and reception and protects the main beamformer 122 from high energy transmit signals.
  • in some embodiments, the T/R switch 118 and other elements in the system can be included in the ultrasound probe 112 rather than in the ultrasound system base, which may house the image processing electronics.
  • An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface.
  • the transmission of ultrasonic signals from the transducer array 114 under control of the microbeamformer 116 is directed by the transmit controller 120, which may be coupled to the T/R switch 118 and a main beamformer 122.
  • the transmit controller 120 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 114, or at different angles for a wider field of view.
  • the transmit controller 120 may also be coupled to a user interface 124 and receive input from the user's operation of a user control.
  • the user interface 124 may include one or more input devices such as a control panel 152, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
  • the partially beamformed signals produced by the microbeamformer 116 may be coupled to a main beamformer 122 where partially beamformed signals from individual patches of transducer elements may be combined into a fully beamformed signal.
  • in some embodiments, the microbeamformer 116 is omitted, and the transducer array 114 is under the control of the beamformer 122, which performs all beamforming of signals.
  • the beamformed signals of beamformer 122 are coupled to processing circuitry 150, which may include one or more processors (e.g., a signal processor 126, a B- mode processor 128, a Doppler processor 160, and one or more image generation and processing components 168) configured to produce an ultrasound image from the beamformed signals (i.e., beamformed RF data).
  • the signal processor 126 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation.
  • the signal processor 126 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination.
  • the processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation.
  • the IQ signals may be coupled to a number of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data).
  • the system may include a B-mode signal path 158 which couples the signals from the signal processor 126 to a B-mode processor 128 for producing B-mode image data.
  • the B-mode processor can employ amplitude detection for the imaging of structures in the body.
  • the signals produced by the B-mode processor 128 may be coupled to a scan converter 130 and/or a multiplanar reformatter 132.
  • the scan converter 130 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 130 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format.
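  • As a rough illustration of what scan conversion does, the sketch below maps sector-format echo data sampled on an (r, θ) grid onto a Cartesian image using nearest-neighbour lookup; the array shapes and geometry handling are simplified editorial assumptions, not the scan converter 130 itself:

        import numpy as np

        def scan_convert(echo, depths, angles, nx=512, nz=512):
            """echo: (n_depths, n_angles) samples; depths in m, angles in rad
            measured from the array normal. Returns an (nz, nx) image."""
            x = np.linspace(depths[-1] * np.sin(angles[0]),
                            depths[-1] * np.sin(angles[-1]), nx)
            z = np.linspace(0.0, depths[-1], nz)
            X, Z = np.meshgrid(x, z)
            r = np.hypot(X, Z)                      # radius of each output pixel
            th = np.arctan2(X, Z)                   # angle of each output pixel
            ri = np.clip(np.searchsorted(depths, r), 0, len(depths) - 1)
            ti = np.clip(np.searchsorted(angles, th), 0, len(angles) - 1)
            inside = (r <= depths[-1]) & (th >= angles[0]) & (th <= angles[-1])
            return np.where(inside, echo[ri, ti], 0.0)  # background outside sector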
  • the multiplanar reformatter 132 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer).
  • the scan converter 130 and multiplanar reformatter 132 may be implemented as one or more processors in some embodiments.
  • a volume renderer 134 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.).
  • the volume renderer 134 may be implemented as one or more processors in some embodiments.
  • the volume renderer 134 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
  • the system may include a Doppler signal path 162 which couples the output from the signal processor 126 to a Doppler processor 160.
  • the Doppler processor 160 may be configured to estimate the Doppler shift and generate Doppler image data.
  • the Doppler image data may include color data which is then overlaid with B-mode (i.e. grayscale) image data for display.
  • the Doppler processor 160 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter.
  • the Doppler processor 160 may be further configured to estimate velocity and power in accordance with known techniques.
  • the Doppler processor may include a Doppler estimator such as an auto-correlator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function.
  • Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques.
  • Other estimators related to the temporal or spatial distributions of velocity such as estimators of acceleration or temporal and/or spatial velocity derivatives can be used instead of or in addition to velocity estimators.
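  • The lag-one autocorrelation estimator mentioned above can be written compactly; this is a textbook Kasai-style sketch for illustration, not code from the disclosure, and the variable names are assumptions:

        import numpy as np

        def kasai_velocity(iq, f0, prf, c=1540.0):
            """iq: complex IQ ensemble, shape (n_pulses, n_samples).
            Velocity comes from the argument of the lag-one autocorrelation;
            Doppler power from the magnitude of the lag-zero autocorrelation."""
            r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)  # lag-one autocorrelation
            r0 = np.sum(np.abs(iq) ** 2, axis=0)            # lag-zero (Doppler power)
            f_d = np.angle(r1) * prf / (2.0 * np.pi)        # Doppler frequency estimate
            return c * f_d / (2.0 * f0), r0                 # axial velocity, power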
  • output from the scan converter 130 may be provided to a completeness processor 170.
  • the ultrasound images may be 2D and/or 3D.
  • the completeness processor 170 may be implemented by one or more processors and/or application specific integrated circuits.
  • the completeness processor 170 may analyze the 2D and/or 3D images to detect/score task completeness, autorecord/automatically save video loops (e.g., a time series of images, cineloop), classify/score scan completeness, document scan completeness at the end of an exam, and/or a combination thereof.
  • the completeness processor 170 may include one or more machine learning and/or artificial intelligence algorithms and/or multiple neural networks, collectively referred to as machine learning models (MLM) 172.
  • the MLM 172 may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder neural network, or the like.
  • the MLM 172 may be implemented in hardware (e.g., neurons are represented by physical components) and/or software (e.g., neurons and pathways implemented in a software application) components.
  • the MLM 172 implemented according to the present disclosure may use a variety of topologies and algorithms for training the MLM 172 to produce the desired output.
  • a software-based neural network may be implemented using a processor (e.g., single or multicore CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel-processing) configured to execute instructions, which may be stored in a computer readable medium, and which when executed cause the processor to perform a trained algorithm for identifying an organ, anatomical feature(s), and/or a view of an ultrasound image (e.g., an ultrasound image received from the scan converter 130).
  • the processor may perform a trained algorithm for identifying a zone and/or quality of an ultrasound image.
  • the MLM 172 may be implemented, at least in part, in a computer-readable medium including executable instructions executed by the completeness processor 170.
  • the MLM 172 may include a You Only Look Once, Version 3 (YOLO V3) network.
  • YOLO V3 may be trained for organ and/or feature detection in images. The organ and/or feature detection may be used to determine whether a task has been completed (e.g., acquiring an image of the kidney-liver interface in the left upper quadrant during a FAST exam).
  • the MLM 172 may include a MobileNet network.
  • MobileNet may be trained for zone and/or image quality detection.
  • zone detection may be used to determine what zone (e.g., RUQ, LUQ, SP) of a subject is being imaged, and provide information on the tasks to be performed in said zone.
  • image quality detection may be used to determine whether an image including a recognized feature is of sufficient quality for diagnostic purposes.
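  • A hedged sketch of how detector outputs might gate task completion follows: the class names, the 0.8 threshold, and the (label, confidence) interface are editorial assumptions, not specified by the disclosure:

        CONF_THRESHOLD = 0.8  # assumed minimum detector confidence

        def update_tasks(detections, task_status):
            """detections: iterable of (label, confidence) pairs from a
            detector such as YOLO V3; task_status: dict label -> bool."""
            for label, conf in detections:
                if label in task_status and conf >= CONF_THRESHOLD:
                    task_status[label] = True   # mark the matching task complete
            return task_status

        # e.g., update_tasks([("kidney-liver interface", 0.91)], status)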
  • the MLM 172 may be trained using any of a variety of currently known or later developed learning techniques to obtain a neural network (e.g., a trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound images, measurements, and/or statistics.
  • the MLM 172 may be statically trained. That is, the MLM may be trained with a data set and deployed on the completeness processor 170.
  • the MLM 172 may be dynamically trained. In these embodiments, the MLM 172 may be trained with an initial data set and deployed on the completeness processor 170. However, the MLM 172 may continue to train and be modified based on ultrasound images acquired by the system 100 after deployment of the MLM 172 on the completeness processor 170.
  • the completeness processor 170 may not include a MLM 172 and may instead implement other image processing techniques for feature recognition and/or quality detection such as image segmentation, histogram analysis, edge detection or other shape or object recognition techniques.
  • the completeness processor 170 may implement the MLM 172 in combination with other image processing methods.
  • the MLM 172 and/or other elements may be selected by a user via the user interface 124.
  • Outputs from the completeness processor 170, the scan converter 130, the multiplanar reformatter 132, and/or the volume renderer 134 may be coupled to an image processor 136 for further enhancement, buffering and temporary storage before being displayed on an image display 138.
  • although output from the scan converter 130 is shown as provided to the image processor 136 via the completeness processor 170, in some embodiments the output of the scan converter 130 may be provided directly to the image processor 136.
  • a graphics processor 140 may generate graphic overlays for display with the images.
  • the completeness processor 170 may provide display data for a list of tasks to be performed.
  • the graphics processor 140 may provide the list of tasks as a text list next to or at least partially overlaying the image.
  • the completeness processor 170 may provide outputs to the graphics processor 140 to alter the displayed list of tasks as the completeness processor 170 determines tasks are completed.
  • the text associated with completed tasks may change color (e.g., from red to green), change format (e.g., strikethrough), or no longer be displayed as part of the list.
  • the completeness processor 170 may provide display information for additional feedback information to the graphics processor 140, such as completeness and/or quality scores.
  • Additional or alternative graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like.
  • the graphics processor may be configured to receive input from the user interface 124, such as a typed patient name or other annotations.
  • the user interface 124 can also be coupled to the multiplanar reformatter 132 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
  • the system 100 may include local memory 142.
  • Local memory 142 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive).
  • Local memory 142 may store data generated by the system 100 including ultrasound images, executable instructions, imaging parameters, training data sets, or any other information necessary for the operation of the system 100.
  • the local memory 142 may store executable instructions in a non- transitory computer readable medium that may be executed by the completeness processor 170.
  • the local memory 142 may store ultrasound images and/or videos responsive to instructions from the completeness processor 170.
  • local memory 142 may store other outputs of the completeness processor 170, such as completeness scores.
  • User interface 124 may include display 138 and control panel 152.
  • the display 138 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some embodiments, display 138 may include multiple displays.
  • the control panel 152 may be configured to receive user inputs (e.g., exam type, information calculated by and/or displayed from the completeness processor 170).
  • the control panel 152 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others).
  • the control panel 152 may additionally or alternatively include soft controls (e.g., GUI control elements or simply, GUI controls) provided on a touch sensitive display.
  • display 138 may be a touch sensitive display that includes one or more soft controls of the control panel 152.
  • various components shown in FIG. 1 may be combined. For instance, completeness processor 170, image processor 136 and graphics processor 140 may be implemented as a single processor. In some embodiments, various components shown in FIG. 1 may be implemented as separate components. For example, signal processor 126 may be implemented as separate signal processors for each imaging mode (e.g., B-mode, Doppler). In some embodiments, one or more of the various processors shown in FIG. 1 may be implemented by general purpose processors and/or microprocessors configured to perform the specified tasks. In some embodiments, one or more of the various processors may be implemented as application specific circuits. In some embodiments, one or more of the various processors (e.g., image processor 136) may be implemented with one or more graphical processing units (GPU).
  • FIG. 2 is a block diagram illustrating an example processor 200 according to examples of the present disclosure.
  • Processor 200 may be used to implement one or more processors and/or controllers described herein, for example, completeness processor 170, image processor 136 shown in FIG. 1 and/or any other processor or controller shown in FIG. 1.
  • Processor 200 may be any suitable processor type including, but not limited to, a microprocessor, a microcontroller, a digital signal processor (DSP), a field programmable array (FPGA) where the FPGA has been programmed to form a processor, a graphical processing unit (GPU), an application specific circuit (ASIC) where the ASIC has been designed to form a processor, or a combination thereof.
  • the processor 200 may include one or more cores 202.
  • the core 202 may include one or more arithmetic logic units (ALU) 204.
  • the core 202 may include a floating point logic unit (FPLU) 206 and/or a digital signal processing unit (DSPU) 208 in addition to or instead of the ALU 204.
  • the processor 200 may include one or more registers 212 communicatively coupled to the core 202.
  • the registers 212 may be implemented using dedicated logic gate circuits (e.g., flip-flops) and/or any memory technology. In some embodiments the registers 212 may be implemented using static memory.
  • the registers 212 may provide data, instructions and addresses to the core 202.
  • processor 200 may include one or more levels of cache memory 210 communicatively coupled to the core 202.
  • the cache memory 210 may provide computer-readable instructions to the core 202 for execution.
  • the cache memory 210 may provide data for processing by the core 202.
  • the computer-readable instructions may have been provided to the cache memory 210 by a local memory, for example, local memory attached to the external bus 216.
  • the cache memory 210 may be implemented with any suitable cache memory type, for example, metal- oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology.
  • the processor 200 may include a controller 214, which may control input to the processor 200 from other processors and/or components included in a system (e.g., control panel 152 and scan converter 130 shown in FIG. 1) and/or outputs from the processor 200 to other processors and/or components included in the system (e.g., display 138 and volume renderer 134 shown in FIG. 1). Controller 214 may control the data paths in the ALU 204, FPLU 206 and/or DSPU 208. Controller 214 may be implemented as one or more state machines, data paths and/or dedicated control logic. The gates of controller 214 may be implemented as standalone gates, FPGA, ASIC or any other suitable technology.
  • the registers 212 and the cache 210 may communicate with controller 214 and core 202 via internal connections 220A, 220B, 220C and 220D.
  • Internal connections may be implemented as a bus, multiplexor, crossbar switch, and/or any other suitable connection technology.
  • Inputs and outputs for the processor 200 may be provided via a bus 216, which may include one or more conductive lines.
  • the bus 216 may be communicatively coupled to one or more components of processor 200, for example the controller 214, cache 210, and/or register 212.
  • the bus 216 may be coupled to one or more components of the system, such as display 138 and control panel 152 mentioned previously.
  • the bus 216 may be coupled to one or more external memories.
  • the external memories may include Read Only Memory (ROM) 232.
  • ROM 232 may be a masked ROM, Erasable Programmable Read Only Memory (EPROM) or any other suitable technology.
  • the external memory may include Random Access Memory (RAM) 233.
  • RAM 233 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology.
  • the external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 235.
  • the external memory may include Flash memory 234.
  • the external memory may include a magnetic storage device such as disc 236.
  • the external memories may be included in a system, such as ultrasound imaging system 100 shown in Fig. 1.
  • local memory 142 may include one or more of ROM 232, RAM 233, EEPROM 235, flash 234, and/or disc 236.
  • one or more processors may execute computer readable instructions encoded on one or more of the memories (e.g., memories 142, 232, 233, 235, 234, and/or 236).
  • processor 200 may be used to implement one or more processors of an ultrasound imaging system, such as ultrasound imaging system 100.
  • the memory encoded with the instructions may be included in the ultrasound imaging system, such as local memory 142.
  • the processor and/or memory may be in communication with one another and the ultrasound imaging system, but the processor and/or memory may not be included in the ultrasound imaging system. Execution of the instructions may cause the ultrasound imaging system to perform one or more functions.
  • a non-transitory computer readable medium may be encoded with instructions that when executed may cause an ultrasound imaging system to determine whether one or more anatomical features are included in an ultrasound image. Based on the anatomical features included in the ultrasound image, the ultrasound system may determine a status of a task, generate display data based on the status of the task, and cause a display, such as display 138, of the ultrasound imaging system to provide a visual indication of the status based on the display data.
  • some or all of the functions may be performed by one processor. In some examples, some or all of the functions may be performed, at least in part, by multiple processors. In some examples, other components of the ultrasound imaging system may perform functions responsive to control signals provided by the processor based on the instructions.
  • the display may display the visual indication based, at least in part, on data received from one or more processors (e.g., graphics processor 140, which may include one or more processors 200).
  • the system 100 may be configured to implement one or more machine learning models, such as a neural network, included in the completeness processor 170.
  • the MLM may be trained with imaging data such as image frames where one or more items of interest are labeled as present.
  • a MLM training algorithm associated with the MLM can be presented with thousands or even millions of training data sets in order to train the MLM to determine a confidence level for each measurement acquired from a particular ultrasound image.
  • the number of ultrasound images used to train the MLM may range from about 1,000 to 200,000 or more.
  • the number of images used to train the MLM may be increased to accommodate a greater variety of patient variation, e.g., weight, height, age, etc.
  • the number of training images may differ for different organs or features thereof, and may depend on variability in the appearance of certain organs or features. For example, the organs of pediatric patients may have a greater range of variability than organs of adult patients. Training the network(s) to determine the pose of an image with respect to an organ model associated with an organ for which population-wide variability is high may necessitate a greater volume of training images.
  • FIG. 3 shows a block diagram of a process for training and deployment of a machine learning model in accordance with examples of the present disclosure.
  • the process shown in FIG. 3 may be used to train the MLM 172 included in the completeness processor 170.
  • the left hand side of FIG. 3, phase 1, illustrates the training of a MLM.
  • training sets which include multiple instances of input arrays and output classifications may be presented to the training algorithm(s) of the MLM (e.g., AlexNet training algorithm, as described by Krizhevsky, A., Sutskever, I. and Hinton, G. E. “ImageNet Classification with Deep Convolutional Neural Networks,” NIPS 2012 or its descendants).
  • Training may involve the selection of a starting (blank) architecture 312 and the preparation of training data 314.
  • the starting architecture 312 may be a blank architecture (e.g., an architecture for a neural network with defined layers and arrangement of nodes but without any previously trained weights) or a partially trained network, such as the Inception networks, which may then be further tailored for classification of ultrasound images.
  • the starting architecture 312 (e.g., blank weights) and training data 314 are provided to a training engine 310 for training the MLM.
  • upon a sufficient number of iterations (e.g., when the MLM performs consistently within an acceptable error), the model 320 is said to be trained and ready for deployment, which is illustrated in the middle of FIG. 3, phase 2.
  • the trained model 320 is applied (via inference engine 330) for analysis of new data 332, which is data that has not been presented to the model 320 during the initial training (in phase 1).
  • the new data 332 may include unknown images such as ultrasound images acquired during a scan of a patient (e.g., torso images acquired from a patient during a FAST exam).
  • the trained model 320 implemented via engine 330 is used to classify the unknown images in accordance with the training of the model 320 to provide an output 334 (e.g., which anatomical features are included in the image, what zone the image was acquired from, quality of the image, or a combination thereof).
  • the output 334 may then be used by the system for subsequent processes 340 (e.g., output of a MLM 172 may be used by the completeness processor 170 to provide a list of completed and outstanding exam tasks).
  • the inference engine 330 may be modified by field training data 338.
  • Field training data 338 may be generated in a similar manner as described with reference to phase 1, but the new data 332 may be used as the training data. In other examples, additional training data may be used to generate field training data 338.
  • FIG. 4 shows example predictions made by a machine learning model compared to ground truth in accordance with examples of the present disclosure.
  • Images 400 and 402 include ultrasound images of a spleen and diaphragm acquired from a patient. The ultrasound images in images 400 and 402 are the same. However, image 400 indicates where anatomical features are predicted by a MLM, and image 402 indicates where the anatomical features are located as labeled by a trained observer (e.g., sonographer, radiologist), referred to as “ground truth.”
  • Block 404 indicates where the MLM predicted the location of the spleen tip, and block 410 indicates where the spleen tip is “truly” located based on the labeling by the trained observer.
  • block 406 indicates the predicted location of the spleen and block 408 indicates the predicted location of the diaphragm.
  • Block 412 indicates the “true” location of the spleen and block 414 indicates the “true” location of the diaphragm.
  • Images 416 and 418 include ultrasound images of a liver, diaphragm, and kidney.
  • the ultrasound images in images 416 and 418 are the same.
  • image 416 indicates predictions by a MLM
  • image 418 indicates where the anatomical features are labeled by the trained observer.
  • Block 420 indicates where the MLM predicted the location of the liver
  • block 426 indicates the “true” location of the liver.
  • block 422 indicates the predicted location of the kidney and block 424 indicates the predicted location of the diaphragm.
  • Block 430 indicates the “true” location of the kidney and block 428 indicates the “true” location of the diaphragm.
  • the predictions made in images 400 and 416 may be compared to the ground truth images 402 and 418 during training of the MLM, such as by the process described in FIG. 3. If the predictions made by the MLM in images 400 and 416 are within a desired margin of error of the ground truth images 402 and 418, the MLM may be determined to be trained and ready for validation and/or deployment.
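  • One common way to quantify the “margin of error” between predicted and ground-truth boxes is intersection-over-union (IoU); the disclosure does not name a specific metric, so the following is an assumed illustration:

        def iou(a, b):
            """a, b: boxes as (x1, y1, x2, y2). Returns overlap ratio in [0, 1]."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            union = ((a[2] - a[0]) * (a[3] - a[1])
                     + (b[2] - b[0]) * (b[3] - b[1]) - inter)
            return inter / union if union else 0.0

        # e.g., accept training when iou(predicted, truth) >= 0.5 for each feature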
  • FIG. 5 is a flowchart providing an overview of exam completeness analysis in accordance with the examples of the present disclosure.
  • the tasks shown in flowchart 500 may be performed in whole or in part by one or more processor(s), such as completeness processor 170.
  • the processor may receive real-time or near-real-time ultrasound data, such as a cineloop of 2D images or 3D images.
  • the ultrasound images may be provided from a scan converter, such as scan converter 130.
  • the scan converter may include a buffer that temporarily stores the images, and the images may be provided to the processor via the buffer in some examples.
  • the processor may determine whether the images are of high or low quality as indicated by block 504.
  • the signal-to-noise ratio, resolution, structural similarity index, or a combination thereof may be quality metrics that are calculated and used to assess image quality in some examples.
  • the calculated quality metric(s) may be compared to a threshold value to determine whether the images are of high or low quality.
  • a deep machine learning model for image quality classification may be adopted.
  • the input to the deep MLM is the 2D image itself, with the output being a probability of the image being of good quality.
  • the images may be determined to be of high or low quality based on whether the images include complete views of anatomy.
  • one or more MLM may detect anatomical features in the images, but may determine the anatomical features are not complete, or not all of the anatomical features required for analysis are present.
  • an MLM may detect a spleen is present in the image, but a tip of the spleen is not included.
  • the MLM may detect a kidney, but may determine an interface with the liver is not present. If the images are determined to be low quality (e.g., the quality metrics are below a threshold value or anatomical features are incomplete), the processor may wait for additional images to be acquired and perform the quality analysis again. In some examples, feedback may be provided to a user (e.g., text or graphics on a display, such as display 138), indicating new images are required.
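  • A minimal sketch of the quality gate in block 504, assuming a model that returns the probability that a 2D image is of good quality (the model interface and the 0.5 threshold are assumptions, not from the disclosure):

        def gate_frames(frames, quality_model, threshold=0.5):
            """Yield only frames that pass the quality gate (block 504);
            low-quality frames are skipped and the processor waits for
            additional acquisitions before proceeding to zone detection."""
            for frame in frames:
                if quality_model(frame) >= threshold:   # probability of good quality
                    yield frame                          # pass downstream (block 506)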
  • the processor may analyze the images (e.g., with MLM, such as MLM 172) to determine a zone being imaged as indicated by block 506. For example, whether the RUQ, LUQ, or SP zone is being imaged in a FAST exam. Other zones may be applicable to different exams (e.g., chambers of the heart may be different zones for a cardiac exam).
  • the processor may automatically detect what type of exam is being performed. In other examples, the type of exam may be indicated by a user input provided via user interface, such as user interface 124.
  • a zone may be identified in block 506, or a type of exam associated with a particular view of a zone or feature may be identified in block 506. For example, while there may be a cardiac zone identified, a particular exam such as an exam using a 4-chamber view or a 2-chamber view may be identified within the same region. In an example, zone classification may occur based on feature or partial feature identification, detection and/or segmentation. Each of these particular identifications may be identified as part of block 506. Further, if one zone is identified and a user is at any point in the method 500, the user may independently decide to change the exam they are performing or the view they are identifying. For example, a user may decide to not complete a full exam before deciding to move to a completely different zone. In such a case, a new zone may be identified or classified in block 506, and the method 500 would return to block 506 from another method block in order to establish an updated exam process based on features identified in the image.
  • the processor may cause a list of “to-do” tasks for the zone to be provided on the display, as indicated by block 508.
  • the processor may implement one or more MLM for classification of the exam zones based on image features and provide the list of to-do tasks for zone scan completion.
  • the list may be displayed as a prompt to the user or may be constantly displayed.
  • displaying the to-do task list may be dependent upon the zone classification algorithm, since the list of tasks varies from zone to zone. In other examples, this feature may be offered independent of the zone classification algorithm when a user provides an input via the user interface to select a zone or particular exam to be performed.
  • zone information can also be specified through a scan protocol sequence selected by the user or provided to the device via a remote user or system.
  • in a scan protocol sequence, medical standards for certain exams may dictate a specific order in which zones of a subject are scanned.
  • the present techniques enable a particular exam protocol and its associated list of tasks to be provided for display and completeness assessment.
  • Example lists of tasks to be completed for each zone for a FAST exam are provided in Table 1.
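  • Table 1 itself is not reproduced here; the mapping below is an illustrative stand-in, with the RUQ entries taken from the RUQ task examples given earlier and the other zones' entries assumed:

        FAST_TASKS = {
            "RUQ": ["kidney", "liver", "liver tip", "diaphragm",
                    "kidney-liver interface", "diaphragm-liver interface",
                    "volume fanning"],
            "LUQ": ["spleen", "spleen tip", "diaphragm"],   # assumed entries
            "SP":  ["bladder", "volume fanning"],            # assumed entries
        }

        def tasks_for_zone(zone):
            """Build the to-do list displayed once block 506 classifies the zone."""
            return [{"task": t, "done": False} for t in FAST_TASKS.get(zone, [])]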
  • the task “Need Volume Fanning” can be shown as a to-do item when 2D image sequences are acquired.
  • volume fanning may be performed without any probe movements for 3D acquisitions as will be described in more detail with reference to block 514.
  • the processor may detect and score task completion.
  • the processor may use one or more MLM, such as MLM 172.
  • the scoring/classification algorithm could be based upon a rule-based approach (using output of free fluid and anatomy detection algorithms) and/or a MLM trained specifically to classify or score task completion. Based on the analysis, the processor may update the progress of task completion on the display, as illustrated in FIG. 6.
  • the method 500 may return to block 506 for zone classification, and a new list of tasks may be displayed in block 508, replacing the previously displayed list of tasks.
  • any partially completed exam may have its task completions stored in a memory such that a user may resume the exam or switch between zones, and the previous status of tasks may be reloaded and displayed based on the detected zone being assessed.
  • a system may include a memory with which to store the status of a completed task in a set of tasks, where the completed task and/or the set of tasks is stored as being associated with at least one of the zone or the anatomical feature identified.
  • the processor may auto-record images acquired by an ultrasound imaging system (e.g., ultrasound imaging system 100) as indicated by box 512.
  • ultrasound systems typically include a buffer that retains the last several seconds of acquisitions (e.g., 5 seconds, 10 seconds); the images are overwritten or discarded if the user does not provide an input indicating the previously acquired images should be saved.
  • the processor may prospectively cause the next several seconds of acquisitions to be saved to memory (e.g., local memory 142) without requiring input from the user.
  • the processor may utilize one or more MLM that perform free fluid detection/segmentation/classification and/or anatomy detection/segmentation/classification and/or image quality classification/scoring to automatically detect key frames and record an exam video loop (e.g., cineloop) without the user having to interact with the user interface.
  • the start of an exam may be detected by image quality (e.g., as discussed with reference to block 504) and images containing free fluid and/or relevant anatomy (e.g., as discussed with reference to block 510), whereas the end of the exam can be triggered by the scan completeness algorithm or manually by the user (e.g., as described with reference to blocks 516 and 518).
  • if free fluid is detected, the scan completeness determination can be triggered immediately without checking for other anatomical features. These aspects may reduce the number of manual interactions required during an exam. This may allow users to focus on analyzing images in real time (e.g., free-fluid exploration in a FAST exam) and/or reduce the risk of users forgetting to save a key image for review after the exam.
  • the saved images can be used for various purposes such as outbound reports and FAST exam summaries in the user interface pipeline to enhance the user experience.
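  • A sketch of the auto-record behaviour of block 512: a rolling buffer retains recent frames, and a detected key frame triggers saving the buffered history plus the next several seconds prospectively, without user input. The frame rate, durations, and class interface are illustrative assumptions:

        from collections import deque

        class AutoRecorder:
            def __init__(self, fps=30, pre_s=5, post_s=5):
                self.buffer = deque(maxlen=fps * pre_s)  # retrospective frames
                self.post_frames = fps * post_s          # prospective frames to keep
                self.remaining = 0
                self.saved = []

            def on_frame(self, frame, is_key_frame):
                if is_key_frame and self.remaining == 0:
                    self.saved.extend(self.buffer)       # flush retained history
                    self.buffer.clear()
                    self.remaining = self.post_frames    # keep saving prospectively
                if self.remaining > 0:
                    self.saved.append(frame)
                    self.remaining -= 1
                else:
                    self.buffer.append(frame)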
  • Block 514 may be performed by the processor when the ultrasound images are a 3D acquisition.
  • the processor may utilize MLM that perform free fluid detection/segmentation/classification, anatomy detection/segmentation/classification, partial anatomy detection/classification/scoring, and image quality classification/scoring to capture a complete zone without the user manipulating the probe (e.g., probe 112) and/or warn the user that a full zone cannot be scanned from the current position of the probe. In addition, the user can be informed if free fluid has been detected.
  • block 514 may reduce or eliminate the need for manual volume fanning.
  • there may be a key imaging location that can be used to acquire a complete volume scan to perform a complete exam (e.g., all zones or all tasks within the zone may be completed) without any probe movements.
  • a key imaging location in a FAST exam may be a probe location where the diaphragm, liver, and kidney are visible in a single image.
  • the processor may analyze the 3D volume imagery and provide an output that indicates whether a complete zone can be scanned from the current imaging point, warning the user when a zone scan cannot be completed from the current probe location. If a complete exam is possible from the current probe position, the processor may cause the ultrasound imaging system to prompt the user to keep the probe stationary at this location, and the ultrasound system automatically completes the scan.
  • Block 514 may be performed responsive to a user input or through a live MLM that processes 3D data in real time.
  • This MLM can be a rule-based or statistical analysis-based approach that makes use of outputs of anatomy and image quality classification algorithms or can be a standalone MLM that provides a binary flag or a confidence score that a complete scan can be acquired from this imaging point.
  • in some examples, the images are not shown on the display during the exam, and the processor may cause the ultrasound imaging system to merely provide a report to the user about the contents of the 3D data and/or prompt the user to place the probe in another location.
  • the processor may use MLM to perform free fluid detection/segmentation/classification, anatomy detection/segmentation/classification, partial anatomy detection/classification/scoring, image quality classification/scoring, and zone detection algorithms to classify/score zone scan completeness as indicated by block 516.
  • This scoring/classification algorithm could be based upon a rule-based approach (e.g., a number of tasks completed out of a total number of tasks assigned for scoring), statistical analysis, or a MLM that can classify or score zone scan completeness based on image features computed by one or more MLM.
  • the MLM-based feature computations that enable zone classification and classification/scoring of zone scan completeness provide feedback to the user as the user is scanning, which may reduce or eliminate the need for input from an expert.
  • the user interface features associated with block 514 may include classification into complete/incomplete and display of a scan completeness score or scan meter that keeps getting updated as the scan progresses. For example, text including “Complete” or “Incomplete” may be provided on a display. In another example, a status bar, area, or circle may gradually be filled in as the exam progresses. In a further example, text indicating a percentage completeness or score may be provided.
  • the processor may provide exam completion related data at the end of the exam.
  • the data may be saved as a complete/incomplete flag and/or a scan completeness score as part of the exam, which may be saved, at least temporarily, to a memory of the imaging system, such as local memory 142.
  • the data, along with the exam data may be transferred from the ultrasound imaging system to another computing system, such as a PACS system.
  • Saving/documenting the scan completeness score/status may be used for filtering exams that need to be verified by an expert. For example, scan completeness scores may be compared to a threshold value. Exams having scan completeness scores equal to or above the threshold value may not be reviewed. In some applications, filtering which exams require expert review may reduce the experts’ workloads. Additionally or alternatively, the completeness scores may be used to provide automated feedback to training/novice users and to report out for the exam.
  • one or more of the various completeness scores may be calculated based on one or more rules. For example, a scan completeness score may be based on a percentage of tasks completed (e.g., if 4 out of 5 required tasks are completed, the completeness score may be 80%). In some examples, one or more of the completeness scores may be based on confidence scores provided by the MLM. A confidence score is an output of the MLM that indicates a calculated accuracy of the prediction made by the MLM.
  • for example, if the MLM detects a feature with a confidence score of 90%, the completeness score for the task may be 90%.
  • a task may not be considered complete unless the confidence score is equal to or above a threshold value (e.g., 70%, 80%, 90%).
  • one or more of the completeness scores may be an average or weighted average of the confidence scores.
  • a completeness score for a zone may be based, at least in part, on an average of the confidence score associated with each task.
  • a task that requires multiple images to complete may have a completeness score that is an average of the confidence score for each image associated with the task.
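Combining these approaches, a hedged sketch of rule-based and confidence-based scoring (the threshold, task names, and score values below are illustrative assumptions):

```python
# Illustrative sketch: rule-based exam completeness plus confidence-based
# task completeness. The threshold and example values are assumptions.
TASK_THRESHOLD = 0.8  # a task counts as complete at/above this confidence

def task_completeness(image_confidences):
    """Task requiring multiple images: average per-image confidence."""
    return sum(image_confidences) / len(image_confidences)

def exam_completeness(task_confidences):
    """Rule-based score: fraction of tasks meeting the threshold
    (e.g., 4 of 5 required tasks complete -> 0.8, i.e., 80%)."""
    done = sum(1 for c in task_confidences.values() if c >= TASK_THRESHOLD)
    return done / len(task_confidences)

scores = {"Liver": 0.95, "Kidney": 0.88, "Free fluid": 0.91,
          "Diaphragm": 0.72, "Bladder": 0.85}
print(exam_completeness(scores))            # 0.8 (4 of 5 tasks complete)
print(task_completeness([0.9, 0.8, 0.7]))   # 0.8 for a multi-image task
```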
  • FIG. 6 shows an example of a display providing a visual indication of tasks to be completed and completed in accordance with examples of the present disclosure.
  • Display 600 may be included in display 138 in some examples. Display 600 provides an ultrasound image 601 acquired by an ultrasound imaging system (e.g., imaging system 100). Display 600 further provides a list 602 of to-do tasks. The list 602 may be based on a detected exam type and/or zone detected. As the exam progresses, as indicated by arrow 603, the display 600 may alter the visual characteristics of list 602 to indicate which tasks have been completed. In the example shown in FIG. 6, the tasks that have been completed 604 in the list 602 are displayed in a different color (e.g., green) than the tasks that have not yet been completed 606 in the list 602. A minimal sketch of this list-update logic follows the FIG. 6 discussion below.
  • completed tasks may “disappear” or may be crossed out.
  • the display 600 may continue to alter the visual characteristics of list 602 to indicate which tasks have been completed.
  • the “Free fluid” task is completed, and the next task in the list 602 is the task “Bladder”.
  • when free fluid is detected, the scan is deemed complete without the need to continue with the remaining task(s) in the list 602.
  • a visual indication of “scan completed” or similar may be displayed upon the detection of free fluid 608.
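A minimal sketch of the list-update behavior shown in FIG. 6 (ANSI terminal colors and the task names stand in for the actual display rendering and are assumptions):

```python
# Illustrative sketch of FIG. 6's task list: completed tasks rendered in
# a different color (green); alternatives include strikethrough or hiding.
GREEN, RESET = "\033[32m", "\033[0m"

def render_task_list(tasks, completed):
    lines = []
    for task in tasks:
        if task in completed:
            lines.append(f"{GREEN}{task} [done]{RESET}")  # completed: green
        else:
            lines.append(task)                            # pending: unstyled
    return "\n".join(lines)

fast_tasks = ["Liver", "Kidney", "Free fluid", "Bladder"]
print(render_task_list(fast_tasks, completed={"Liver", "Kidney"}))
```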
  • FIG. 7 is a flow chart of a method according to examples of the present disclosure.
  • the method 700 may be performed by an ultrasound imaging system, such as imaging system 100.
  • the method 700 may be performed in whole or in part by one or more processors, such as completeness processor 170 and/or graphics processor 140.
  • the ultrasound images may be acquired with an ultrasound probe, such as ultrasound probe 112.
  • the ultrasound images may include one or more 2D images.
  • the ultrasound images may include one or more 3D images.
  • the ultrasound images may include a combination of 2D and 3D images.
  • “determining, with at least one processor, whether one or more anatomical features are included in the ultrasound images” may be performed.
  • the processor may include a completeness processor, such as completeness processor 170.
  • the processor may implement one or more machine learning models, such as MLM 172, to make the determination.
  • the processor may implement one or more image processing techniques (e.g., image segmentation) in addition to or instead of a machine learning model. A hedged sketch of such a determination is given below.
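As a hedged sketch of this determination (the feature labels, confidence values, and threshold are assumptions, and the MLM itself is stubbed out):

```python
# Hypothetical sketch: turn per-feature confidences from an anatomy
# detection MLM into present/absent decisions. Names are illustrative.
DETECTION_THRESHOLD = 0.7

def detect_anatomy(confidences):
    """Map each feature's MLM confidence to a presence decision."""
    return {name: score >= DETECTION_THRESHOLD
            for name, score in confidences.items()}

# Stand-in for one image's MLM output (values illustrative)
mlm_output = {"liver": 0.93, "kidney": 0.81, "free_fluid": 0.12}
print(detect_anatomy(mlm_output))
# {'liver': True, 'kidney': True, 'free_fluid': False}
```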
  • determining a status of a task may be performed by the processor. In some examples, the determining may be based on the anatomical features included in the ultrasound images, for example, as discussed with reference to blocks 508 and 510 of FIG. 5.
  • providing on a display a visual indication of the status may be performed.
  • the display may include display 138.
  • the processor may provide display data to the display based on the status of the task, and the display provides the visual indication of the status based on the display data.
  • the display also provides one or more of the ultrasound images.
  • the visual indication of the task and/or its status is provided at least partially overlaid on the image as shown in FIG. 6 and as discussed with reference to block 508 in FIG. 5.
  • method 700 may further include determining a zone within the subject where the ultrasound images were acquired and providing a second visual indication of a set of tasks based on the zone; a sketch of one possible zone-to-task mapping follows this discussion.
  • the ultrasound imaging system may include a user interface, such as user interface 124, that is configured to receive an input from the user, and the input indicates an exam type, a zone of the subject, or a combination thereof.
  • a completed task of the set of tasks is displayed differently than an uncompleted task of the set of tasks. For example, as shown in FIG. 6, the completed task is a different color than the uncompleted task.
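A minimal sketch of a zone-to-task mapping (the zone and task names follow a typical FAST protocol and are illustrative assumptions, not a normative task set):

```python
# Illustrative per-zone task sets for a FAST-style exam.
ZONE_TASKS = {
    "RUQ": ["Liver", "Kidney", "Free fluid"],
    "LUQ": ["Spleen", "Kidney", "Free fluid"],
    "Pelvic": ["Bladder", "Free fluid"],
    "Subxiphoid": ["Heart", "Pericardial fluid"],
}

def tasks_for_zone(zone):
    """Return the to-do list to display once the zone is determined."""
    return ZONE_TASKS.get(zone, [])

print(tasks_for_zone("RUQ"))  # ['Liver', 'Kidney', 'Free fluid']
```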
  • method 700 may further include determining whether the ultrasound images are high quality or low quality. In some examples, this determination may be performed prior to determining whether the anatomical features are included. In some examples, the quality may be determined based on one or more quality factors (e.g., signal-to-noise ratio). In some examples, determining whether the images are high quality or low quality may be based, at least in part, on whether the anatomical features are included. In some examples, a MLM may be used to determine the quality of the images. In some examples, the quality may be determined as described with reference to block 504 in FIG. 5. A minimal sketch of a quality gate is given below.
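A minimal sketch of such a quality gate, assuming a crude signal-to-noise estimate as the sole quality factor (an actual system, and the MLM-based approach described here, would be more sophisticated):

```python
# Hedged sketch: gate frames on a crude SNR proxy before running anatomy
# detection. The estimator and threshold are illustrative assumptions.
import numpy as np

SNR_THRESHOLD_DB = 10.0

def estimate_snr_db(image: np.ndarray) -> float:
    """Crude proxy: mean intensity over intensity std, in dB."""
    mean, std = float(image.mean()), float(image.std())
    return 0.0 if std == 0.0 else 20.0 * np.log10(mean / std)

def is_high_quality(image: np.ndarray) -> bool:
    return estimate_snr_db(image) >= SNR_THRESHOLD_DB

rng = np.random.default_rng(0)
frame = rng.uniform(0.4, 0.6, size=(256, 256))  # stand-in for B-mode pixels
print(is_high_quality(frame))  # True for this low-variance frame
```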
  • method 700 may include saving to a memory the ultrasound images, future acquired ultrasound images, or a combination thereof.
  • the ultrasound images may be saved to local memory 142.
  • the images may be saved automatically as discussed with reference to block 512 in FIG. 5.
  • a MLM may be used to determine when to save the ultrasound images.
  • the ultrasound images include a three-dimensional (3D) dataset.
  • the method 700 may further include determining, based on at least one ultrasound image (e.g., one or more images in the 3D data set, or a 2D image acquired prior to obtaining a 3D dataset), whether the task, a set of tasks, or a combination thereof can be completed by acquiring ultrasound images from a current location of the ultrasound probe. In some examples, the determination may be made, at least in part, using a MLM.
  • method 700 may further include providing a prompt via a user interface to change a location of the ultrasound probe. For example, as described with reference to block 514 in FIG. 5.
  • method 700 may further include computing a score indicating a degree of completeness of the task, a degree of completeness of an exam, or a combination thereof.
  • the score is based, at least in part, on a confidence score provided by a MLM.
  • the score indicating a degree of completeness of the exam is based, at least in part, on a number of tasks completed out of a total number of tasks.
  • FIG. 8 is a flow chart of a method according to other examples of the present disclosure.
  • anatomical features detected in the ultrasound imagery may include free fluid, and the FAST exam may be defined as complete once free fluid is detected, without detecting all other anatomical features (i.e., liver, kidney, and so on) for the respective zone.
  • this can trigger “scan complete” without completing the to-do list (e.g., “Bladder” as in FIG. 6).
  • ultrasound images from a subject are acquired.
  • the ultrasound images may be acquired with an ultrasound probe, such as ultrasound probe 112.
  • the ultrasound images may include one or more 2D images.
  • the ultrasound images may include one or more 3D images.
  • the ultrasound images may include a combination of 2D and 3D images.
  • a status of a current task is determined and an indication of the status may be displayed.
  • the processor may include a completeness processor, such as completeness processor 170.
  • the processor may implement one or more machine learning models, such as MLM 172, to make the determination.
  • the processor may implement one or more image processing techniques (e.g., image segmentation) in addition to or instead of a machine learning model.
  • the determining of the current task may be based on the anatomical features included in the ultrasound images, for example, as discussed with reference to blocks 508 and 510 of FIG. 5.
  • a visual indication of the status may be displayed.
  • the display may include display 138.
  • the processor may provide display data to the display based on the status of the task, and the display provides the visual indication of the status based on the display data.
  • the display also provides one or more of the ultrasound images.
  • the visual indication of the task and/or its status is provided at least partially overlaid on the image as shown in FIG. 6 and as discussed with reference to block 508 in FIG. 5.
  • method 800 may further include determining a zone within the subject where the ultrasound images were acquired and providing a second visual indication of a set of tasks based on the zone.
  • the ultrasound imaging system may include a user interface, such as user interface 124, that is configured to receive an input from the user, and the input indicates an exam type, a zone of the subject, or a combination thereof.
  • a completed task of the set of tasks is displayed differently than an uncompleted task of the set of tasks. For example, as discussed previously in connection with FIG. 6, the completed task is a different color than the uncompleted task.
  • the machine learning algorithms implemented by the processor are configured to determine whether free fluid is included in the ultrasound image. Further, the machine learning algorithms implemented by the processor may be configured to specifically detect free fluid while differentiating it from other contained fluids, such as cysts.
  • method 800 may further include determining whether the ultrasound images are high quality or low quality. In some examples, this determination may be performed prior to determining whether the anatomical features are included. In some examples, the quality may be determined based on one or more quality factors (e.g., signal-to-noise ratio). In some examples, determining whether the images are high quality or low quality may be based, at least in part, on whether the anatomical features are included. In some examples, a MLM may be used to determine the quality of the images. In some examples, the quality may be determined as described with reference to block 504 in FIG. 5.
  • method 800 may include saving to a memory the ultrasound images, future acquired ultrasound images, or a combination thereof.
  • the ultrasound images may be saved to local memory 142.
  • the images may be saved automatically as discussed with reference to block 512 in FIG. 5.
  • a MLM may be used to determine when to save the ultrasound images.
  • the ultrasound images include a three-dimensional (3D) dataset.
  • the method 800 may further include determining, based on at least one ultrasound image (e.g., one or more images in the 3D data set, or a 2D image acquired prior to obtaining a 3D dataset), whether the task, a set of tasks, or a combination thereof can be completed by acquiring ultrasound images from a current location of the ultrasound probe. In some examples, the determination may be made, at least in part, using a MLM.
  • method 800 may further include providing a prompt via a user interface to change a location of the ultrasound probe. For example, as described with reference to block 514 in FIG. 5.
  • an exam completion may be triggered (END in FIG. 8). This may include, for example, a visual indication of completion of the exam on the display. As discussed previously, remaining tasks (if any) need not be completed. In the case where free fluid is not found (No at 804), the method 800 proceeds to the block at which the processor determines whether the final task among the list of tasks of the exam has been completed.
  • an exam completion may be triggered (END in FIG. 8). This may include, for example, a visual indication of completion of the exam on the display.
  • the processor generates display data updating the displayed list of tasks on the display. An example of this is described above in connection with FIG. 6.
  • after updating the list of tasks, the method 800 returns to block 802 to acquire an ultrasound image from the subject for a next task of the exam. The exam continues in this manner until either free fluid is detected or the final task is completed. A minimal sketch of this control flow is given below.
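A minimal sketch of this control flow (the acquisition and free-fluid detection are stubbed; the block numbers follow FIG. 8 as described above):

```python
# Hedged sketch of the method 800 loop: acquire an image per task, end
# the exam early if free fluid is found, otherwise advance the list.
def run_exam(tasks, acquire_image, detect_free_fluid):
    for i, task in enumerate(tasks):
        image = acquire_image(task)              # block 802
        if detect_free_fluid(image):             # Yes at 804
            return "Exam complete: free fluid detected"
        if i == len(tasks) - 1:                  # final task completed
            return "Exam complete: all tasks finished"
        # otherwise update the displayed task list and continue
    return "Exam incomplete"

result = run_exam(["Liver", "Kidney", "Free fluid", "Bladder"],
                  acquire_image=lambda task: task,        # stub
                  detect_free_fluid=lambda image: False)  # stub
print(result)  # Exam complete: all tasks finished
```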
  • any ultrasound exam that has a set of standard images, videos, measurements, or a combination thereof, associated with the exam may utilize the features of the present disclosure.
  • the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
  • the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
  • processors described herein can be implemented in hardware, software, and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
  • the functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
  • although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to, renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention relates to an ultrasound imaging system that includes an ultrasound probe configured to acquire an ultrasound image from a subject, a display configured to provide the ultrasound image, and a processor. The processor is configured to display on the display a set of tasks to be completed during an exam, receive the ultrasound image, determine whether one or more anatomical features are included in the ultrasound image, determine a status of a current task of the set of tasks based on the anatomical features included in the ultrasound image, and provide display data to the display based on the status of the current task, wherein the display is further configured to provide a visual indication of the status based on the display data. The anatomical features include free fluid, and the processor is further configured to indicate completion of the exam upon a determination that free fluid is included in the ultrasound image.
PCT/EP2024/079083 2023-10-26 2024-10-16 Systems and methods for imaging screening Pending WO2025087746A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363545805P 2023-10-26 2023-10-26
US63/545,805 2023-10-26

Publications (1)

Publication Number Publication Date
WO2025087746A1 true WO2025087746A1 (fr) 2025-05-01

Family

ID=93213524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/079083 Pending WO2025087746A1 (fr) Systems and methods for imaging screening

Country Status (1)

Country Link
WO (1) WO2025087746A1 (fr)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530885B1 (en) 2000-03-17 2003-03-11 Atl Ultrasound, Inc. Spatially compounded three dimensional ultrasonic images
US6443896B1 (en) 2000-08-17 2002-09-03 Koninklijke Philips Electronics N.V. Method for creating multiplanar ultrasonic images of a three dimensional object
US20160081659A1 (en) * 2014-09-24 2016-03-24 General Electric Company Method and system for selecting an examination workflow
US20180092629A1 (en) * 2016-09-30 2018-04-05 Toshiba Medical Systems Corporation Ultrasound diagnosis apparatus, medical image diagnosis apparatus, and computer program product
US20200367859A1 (en) * 2019-05-22 2020-11-26 GE Precision Healthcare LLC Method and system for ultrasound imaging multiple anatomical zones
US20230320694A1 (en) * 2022-04-12 2023-10-12 Koninklijke Philips N.V. Graphical user interface for providing ultrasound imaging guidance

Similar Documents

Publication Publication Date Title
US12121401B2 (en) Ultrasound imaging system with a neural network for deriving imaging data and tissue information
JP7672398B2 (ja) Systems and methods for image optimization
US20210401407A1 Identifying an interventional device in medical images
EP4125606B1 (fr) System and method for imaging and determination of epicardial adipose tissue
WO2021099171A1 (fr) Systems and methods for imaging screening
US12396702B2 (en) Systems, methods, and apparatuses for quantitative assessment of organ mobility
US20240119705A1 (en) Systems, methods, and apparatuses for identifying inhomogeneous liver fat
US12422548B2 (en) Systems and methods for generating color doppler images from short and undersampled ensembles
US20240173007A1 (en) Method and apparatus with user guidance and automated image setting selection for mitral regurgitation evaluation
EP4210588B1 Devices and methods for measuring heart stiffness
WO2025087746A1 (fr) Systems and methods for imaging screening
WO2024013114A1 (fr) Systems and methods for imaging screening
EP4320587B1 Systems, methods, and apparatuses for identifying inhomogeneous liver fat
WO2025124940A1 (fr) Systems and methods for imaging screening
WO2025098957A1 (fr) Systems and methods for evaluating ultrasound scans

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24794017

Country of ref document: EP

Kind code of ref document: A1