WO2021099171A1 - Systems and methods for imaging screening (Google Patents)
- Publication number: WO2021099171A1 (PCT application PCT/EP2020/081548)
- Authority: WIPO (PCT)
- Prior art keywords: organ, ultrasound, poses, ultrasound images, image
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion)
Classifications
- A61B8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/4245 — Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
- A61B8/145 — Echo-tomography characterised by scanning multiple planes
- A61B8/461 — Displaying means of special interest
- A61B8/48 — Diagnostic techniques
- A61B8/5223 — Devices using data or image processing involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter
- A61B8/5238 — Devices using data or image processing involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- G06T19/00 — Manipulating 3D models or images for computer graphics
- G06T2210/41 — Medical (indexing scheme for image generation or computer graphics)
Definitions
- the present disclosure pertains to imaging systems and methods for monitoring the progress of imaging an object; more specifically, it pertains to monitoring the progress of an ultrasound scan of an organ.
- Ultrasound is an important and commonly used imaging modality for disease screening and surveillance.
- Liver disease is a large and growing health challenge worldwide, with an estimated 25% of the population in the western world affected by non-alcoholic fatty liver disease, which can lead to cirrhosis, fibrosis and liver cancer (HCC).
- the minimum size for a clinically significant liver lesion is 10mm diameter, which means that complete screening coverage of the liver is achieved only when images with a spacing of not more than 10mm are obtained throughout the liver.
- due to the complexity of liver anatomy and the resulting need to scan the liver from multiple view directions (e.g., long-axis and transverse) and through multiple imaging windows, it is difficult for a sonographer to assess which parts of the liver have or have not been imaged in sufficient detail to capture 10mm lesions.
- in some patients it is not possible to visualize the entire liver with ultrasound, and substantial coverage gaps may exist. These patients should be referred to MRI for surveillance, but the sonographer may not be aware of the coverage gaps.
- the present disclosure describes image data processing, visualization, feedback and guidance for imaging screening, for example, ultrasound screening of the liver.
- the disclosure describes providing an organ model of an organ to be imaged, the organ model being volumetric and comprising coordinate axes specific to the organ model, and mapping images obtained from a real-time scan of a patient’s organ onto the organ model.
- the mapping may include obtaining a plurality of current image planes from a real-time scan and comparing the image planes to a population of reference image planes spatially mapped to the organ model, each reference image plane having a particular pose within the organ model.
- the comparing step may be accomplished using a neural network trained on a plurality of reference images having particular poses with respect to the model; the reference images may be from computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, etc.
- the mapping may further include determining poses of the current image frames, accumulating the poses of the current image frames with respect to the organ model, and generating a rendering of the model showing current coverage of the real-time scan of the organ.
- the systems and methods described herein may keep track of the image poses relative to a model of the target organ and visualize the information in ways that provide feedback about which parts of the organ have already been covered, which coverage gaps exist, how to close the coverage gaps, and/or what the overall coverage percentage is.
- the systems and methods described herein may increase the accuracy of ultrasound screening by avoiding coverage gaps, improve the workflow by directing the operator toward closing remaining coverage gaps, and/or provide decision support by indicating whether a referral to another imaging modality is needed.
- An ultrasound imaging system may include an ultrasound probe configured to acquire ultrasound signals for generating ultrasound images of an organ and a processor configured to receive the ultrasound images, for each of the ultrasound images, determine a respective pose of the ultrasound image in reference to an organ model, generate coverage data by determining regions of the organ model covered by the ultrasound images based, at least in part, on the poses, and generate display data corresponding to the coverage data.
- the ultrasound imaging system may further include a display configured to provide imaging guidance to a user by displaying the display data to the user.
- a method may include receiving a plurality of ultrasound images of an organ, determining poses for individual ones of the plurality of ultrasound images, wherein the poses are in reference to an organ model, determining regions of the organ covered by the plurality of ultrasound images based, at least in part, on the poses to generate coverage data, generating display data corresponding to the coverage data, and displaying the coverage data based on the display data.
- a non-transitory computer- readable medium may contain instructions, that when executed, may cause an imaging system to acquire a plurality of ultrasound images of an organ, determine poses for individual ones of the plurality of ultrasound images, wherein the poses are in reference to an organ model, determine regions of the organ covered by the plurality of ultrasound images based, at least in part, on the poses to generate coverage data, generate display data corresponding to the coverage data, and display the coverage data based on the display data.
- FIG. 1 is a block diagram of an ultrasound system in accordance with principles of the present disclosure.
- FIG. 2 is a block diagram illustrating an example processor in accordance with principles of the present disclosure.
- FIG. 3 is a block diagram of a process for training and deployment of a neural network in accordance with the principles of the present disclosure.
- FIG. 4 is an illustration of an example organ model of a human liver according to principles of the present disclosure.
- FIG. 5 is an illustration of spatial registration of an ultrasound image to an organ segmentation according to principles of the present disclosure.
- FIG. 6 is an illustration of spatial registration of an organ segmentation to an organ model and determining a pose of an ultrasound image to the organ model according to principles of the present disclosure.
- FIG. 7 is a block diagram of an example of a neural network according to principles of the present disclosure.
- FIG. 8 is a visualization of a calculation for determining coverage of an organ by ultrasound images according to principles of the present disclosure.
- FIG. 9 is an illustration of an example coverage map according to principles of the present disclosure.
- FIGS. 10A, 10B and 10C are illustrations of a voxelation technique for determining organ coverage according to principles of the present disclosure.
- FIG. 11 is an example of a confidence coverage map according to principles of the present disclosure.
- FIG. 12 is an example of a checklist according to principles of the present disclosure.
- FIG. 13 is an example of a coverage map and a graphic representing a location and orientation of an ultrasound probe according to principles of the present disclosure.
- a neural network such as a deep convolutional neural network (CNN) may be trained and deployed to infer ultrasound image frame positions relative to a model of a target organ. The frame positions may be used to create a map of screening “coverage” in the target organ.
- a visualization of the coverage information may be provided to guide the user for performing complete screening exams and/or to recognize when complete coverage is not possible, for example, due to lack of adequate imaging windows.
- an organ model S_M that is representative of a typical or average shape and size of the organ in a subject population (e.g. adult males) may be generated and a standardized coordinate system for that model may be defined.
- the coordinate system may have an origin at a model centroid with coordinate axes aligned with three anatomical axes of the organ.
- spatially tracked ultrasound images of an organ may be acquired and registered to a computed tomography (CT) scan of the same subject (e.g., patient) in which the same organ (e.g. liver) is segmented.
- a transformation T_i,p for image i in patient p may be obtained, which may map the centroid and orientation of the ultrasound images onto the subject organ segmentation S_p.
- this may be accomplished with the Philips PercuNav fusion imaging system, which features automatic registration of tracked ultrasound with a segmented liver CT scan in the same patient.
- other registration methods and/or ultrasound probe tracking methods may be used.
- the subject organ segmentations S_p may be spatially registered to the organ model shape S_M using a potentially non-rigid registration transformation T_p2M.
- T_i,p and T_p2M may be combined to compute the pose T_stand_i,p of the spatially registered images I_i,p in the standardized organ model coordinate system, as will be described in more detail below.
- pose refers to the position and orientation (e.g., translation and rotation) of an object in three dimensional (3D) space, for example, the position and orientation of an image within an organ model.
- the CNN may be trained with the registered spatially tracked ultrasound images to estimate the ultrasound image pose in standardized organ model coordinates T_stand_i,p, using the acquired images I_i,p as inputs.
- the CNN may be capable of estimating the pose T_stand_i,p of images acquired freehand without a probe tracking system and/or co-registration with an image from another modality (e.g., CT, MRI).
- the acquired images may be pre-processed to make the images more suitable for input to the trained CNN.
- the images may be cropped, scaled, subsampled, and/or edge enhanced.
- the CNN may take the pre-processed images as input and provide the pose estimates relative to the standardized organ model as output.
- all pose estimates provided by the trained CNN may be accumulated and processed for spatial filtering, calculation of coverage percentage, and/or coverage gaps.
- the output may be rendered for display. For example, the organ model with highlighting of an area that has not yet been sufficiently imaged may be displayed. The output may be provided on a display that is either part of the ultrasound imaging system or separate.
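- As a rough sketch of the acquisition-to-display flow described above (the function names preprocess, estimate_pose, update_coverage and render_coverage are hypothetical placeholders, not names from the disclosure), the accumulation step could be organized as follows:

```python
def screening_loop(frames, preprocess, estimate_pose, update_coverage, render_coverage):
    """Illustrative sketch: accumulate per-frame pose estimates into coverage feedback."""
    poses = []
    for frame in frames:                 # live ultrasound frames from the scan
        image = preprocess(frame)        # e.g., crop, scale, subsample, edge enhance
        pose = estimate_pose(image)      # pose in standardized organ model coordinates
        poses.append(pose)
        update_coverage(pose)            # mark the organ model regions covered by this frame
    return poses, render_coverage()      # e.g., model rendering highlighting remaining gaps
```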
- FIG. 1 shows a block diagram of an ultrasound imaging system 100 constructed in accordance with the principles of the present disclosure.
- An ultrasound imaging system 100 may include a transducer array 114, which may be included in an ultrasound probe 112, for example an external probe or an internal probe such as an intravascular ultrasound (IVUS) catheter probe.
- the transducer array 114 may be in the form of a flexible array configured to be conformally applied to a surface of a subject to be imaged (e.g., patient).
- the transducer array 114 is configured to transmit ultrasound signals (e.g., beams, waves) and receive echoes responsive to the ultrasound signals.
- transducer arrays may be used, e.g., linear arrays, curved arrays, or phased arrays.
- the transducer array 114 can include a two dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging.
- the axial direction is the direction normal to the face of the array (in the case of a curved array the axial directions fan out)
- the azimuthal direction is defined generally by the longitudinal dimension of the array
- the elevation direction is transverse to the azimuthal direction.
- the transducer array 114 may be coupled to a microbeamformer 116.
- the microbeamformer 116 may control the transmission and reception of signals by active elements in the array 114 (e.g., an active subset of elements of the array that define the active aperture at any given time).
- the microbeamformer 116 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 118, which switches between transmission and reception and protects the main beamformer 122 from high energy transmit signals.
- T/R switch 118 and other elements in the system can be included in the ultrasound probe 112 rather than in the ultrasound system base, which may house the image processing electronics.
- An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface.
- the transmission of ultrasonic signals from the transducer array 114 under control of the microbeamformer 116 is directed by the transmit controller 120, which may be coupled to the T/R switch 118 and a main beamformer 122.
- the transmit controller 120 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 114, or at different angles for a wider field of view.
- the transmit controller 120 may also be coupled to a user interface 124 and receive input from the user's operation of a user control.
- the user interface 124 may include one or more input devices such as a control panel 152, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
- the partially beamformed signals produced by the microbeamformer 116 may be coupled to a main beamformer 122 where partially beamformed signals from individual patches of transducer elements may be combined into a fully beamformed signal.
- in some embodiments, the microbeamformer 116 is omitted, the transducer array 114 is under the control of the main beamformer 122, and the beamformer 122 performs all beamforming of signals.
- the beamformed signals of beamformer 122 are coupled to processing circuitry 150, which may include one or more processors (e.g., a signal processor 126, a B-mode processor 128, a Doppler processor 160, and one or more image generation and processing components 168) configured to produce an ultrasound image from the beamformed signals (i.e., beamformed RF data).
- the signal processor 126 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 126 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination.
- the processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation.
- the IQ signals may be coupled to a plurality of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data).
- the system may include a B-mode signal path 158 which couples the signals from the signal processor 126 to a B-mode processor 128 for producing B-mode image data.
- the B-mode processor can employ amplitude detection for the imaging of structures in the body.
- the signals produced by the B-mode processor 128 may be coupled to a scan converter 130 and/or a multiplanar reformatter 132.
- the scan converter 130 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 130 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format.
- the multiplanar reformatter 132 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer).
- the scan converter 130 and multiplanar reformatter 132 may be implemented as one or more processors in some embodiments.
- a volume renderer 134 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.).
- the volume renderer 134 may be implemented as one or more processors in some embodiments.
- the volume renderer 134 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
- the system may include a Doppler signal path 162 which couples the output from the signal processor 126 to a Doppler processor 160.
- the Doppler processor 160 may be configured to estimate the Doppler shift and generate Doppler image data.
- the Doppler image data may include color data which is then overlaid with B-mode (i.e. grayscale) image data for display.
- the Doppler processor 160 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter.
- the Doppler processor 160 may be further configured to estimate velocity and power in accordance with known techniques.
- the Doppler processor may include a Doppler estimator such as an autocorrelator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function.
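- As a sketch of such a lag-based estimator (this is the standard Kasai-type autocorrelation formulation, offered as an illustration rather than the disclosed implementation), velocity can be derived from the phase of the lag-one autocorrelation and power from the lag-zero magnitude:

```python
import numpy as np

def autocorrelation_doppler(iq, prf, f0, c=1540.0):
    """Estimate Doppler velocity and power from an ensemble of IQ samples.

    iq  : complex array, slow-time (ensemble) along axis 0
    prf : pulse repetition frequency in Hz
    f0  : transmit center frequency in Hz
    c   : assumed speed of sound in m/s
    """
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]), axis=0)    # lag-one autocorrelation
    r0 = np.mean(np.abs(iq) ** 2, axis=0)              # lag-zero autocorrelation
    f_doppler = np.angle(r1) * prf / (2.0 * np.pi)     # Doppler frequency from the argument of R(1)
    velocity = f_doppler * c / (2.0 * f0)              # axial velocity estimate
    power = r0                                         # Doppler power from the magnitude of R(0)
    return velocity, power
```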
- Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques.
- Other estimators related to the temporal or spatial distributions of velocity such as estimators of acceleration or temporal and/or spatial velocity derivatives can be used instead of or in addition to velocity estimators.
- the velocity and power estimates may undergo further threshold detection to further reduce noise, as well as segmentation and post-processing such as filling and smoothing.
- the velocity and power estimates may then be mapped to a desired range of display colors in accordance with a color map.
- the color data also referred to as Doppler image data, may then be coupled to the scan converter 130, where the Doppler image data may be converted to the desired image format and overlaid on the B-mode image of the tissue structure to form a color Doppler or a power Doppler image.
- output from the scan converter 130 may be provided to a screening processor 170.
- the screening processor 170 may be implemented by one or more processors and/or application specific integrated circuits.
- the screening processor 170 may include one or more machine learning or artificial intelligence algorithms and/or one or more neural networks, such as neural network 172.
- the neural network 172 may include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), an autoencoder neural network, or the like, to estimate the pose of an image with respect to an organ model 174.
- the neural network 172 may be implemented in hardware (e.g., neurons are represented by physical components) and/or software (e.g., neurons and pathways implemented in a software application) components.
- the neural network 172 implemented according to the present disclosure may use a variety of topologies and learning algorithms for training the neural network to produce the desired output.
- a software-based neural network may be implemented using a processor (e.g., single or multi-core CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel processing) configured to execute instructions, which may be stored in computer readable medium, and which when executed cause the processor to perform a trained algorithm for identifying the pose of an ultrasound image (e.g., an ultrasound image received from the scan converter 130) with respect to an organ model 174, which may be stored in local memory 142 in some examples.
- the neural network 172 may be implemented, at least in part, in a computer- readable medium comprising executable instructions executed by the screening processor 170.
- the neural network 172 may receive acquired ultrasound images as inputs and determine a pose of each image in relation to the organ model 174. That is, the neural network 172 may determine where in a subject’s organ the ultrasound image was acquired in relation to the organ model 174, the organ model 174 being a model of the same organ as the subject’s organ.
- the screening processor 170 may receive the poses of the images output by the neural network 172.
- the screening processor 170 may further receive the organ model 174 from local memory 142 and ultrasound images from the scan converter 130. Based on the estimated poses provided by the neural network 172, the screening processor 170 may calculate a volume within the organ model 174 that was covered (e.g., imaged) by each ultrasound image and calculate coverage data for all the ultrasound images. In some examples, the screening processor 170 may then generate data for display that illustrates on a visualization of the organ model 174 the areas of the organ that have been covered (e.g., images have been acquired of those portions of the organ).
- the screening processor 170 may generate data for display that illustrates on the visualization of the organ model 174 the areas of the organ that have not been covered.
- the screening processor 170 may generate data for display that illustrates on a visualization of the organ model 174 confidence of coverage. That is, not only may the screening processor 170 determine which areas of the organ have been imaged, but the screening processor 170 may calculate a score indicating how confident the screening processor 170 is that an area of the organ has been sufficiently imaged.
- the screening processor 170 may calculate a total percentage of organ coverage. The calculated percentage may be provided for display to the user. In some examples, the screening processor 170 may generate an imaging checklist for display based on its calculated coverage data. In some examples, the screening processor 170 may determine a location and/or orientation of the ultrasound probe 112 based on the pose information of the last acquired image. Based on the location and/or orientation of the ultrasound probe 112, the screening processor 170 may provide display data for displaying a graphic of an ultrasound probe in relation to the organ model 174. In some examples, the screening processor 170 may further calculate a translation of the probe 112 relative to the organ model 174 to image the gaps found by the screening processor 170. The translation of the probe calculated by the screening processor 170 may be provided to a user via text, images, and/or audio cues.
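- As a small illustration of how a total coverage percentage could be derived from the voxel bookkeeping described later (the convention of -1 outside the organ and values of 0 or more inside follows the voxelation example below; this is a sketch, not the disclosed implementation):

```python
import numpy as np

def coverage_percentage(coverage):
    """Percentage of inside-organ voxels (value >= 0) that have been covered (value > 0)."""
    inside = coverage >= 0
    covered = coverage > 0
    return 100.0 * covered.sum() / max(int(inside.sum()), 1)
```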
- the screening processor 170 may pre-process images prior to input into the neural network 172.
- the screening processor 170 may change the dimensions of the images (e.g., cropping, scaling, interpolation, down-sampling, pooling), the color of the images (e.g., scale, hue, resolution), intensity levels of the images (e.g., normalization, histogram equalization), and/or geometric transformations (e.g., rotation, translation, scaling, shearing, non-linear warping).
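- A minimal sketch of such pre-processing using only NumPy, assuming single-channel 2D frames (the crop size, subsampling factor and normalization are illustrative choices, not values specified in the disclosure):

```python
import numpy as np

def preprocess_frame(frame, out_size=128):
    """Center-crop to a square, subsample to roughly out_size, and normalize intensities."""
    h, w = frame.shape
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    crop = frame[y0:y0 + side, x0:x0 + side].astype(np.float32)
    step = max(side // out_size, 1)
    small = crop[::step, ::step][:out_size, :out_size]     # simple subsampling / cropping
    small = (small - small.mean()) / (small.std() + 1e-6)  # intensity normalization
    return small
```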
- the neural network 172 may be trained using any of a variety of currently known or later developed learning techniques to obtain a neural network (e.g., a trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound images, measurements, and/or statistics and estimate the poses of the images.
- the neural network 172 may be statically trained. That is, the neural network may be trained with a data set and deployed on the screening processor 170.
- the neural network 172 may be dynamically trained. In these embodiments, the neural network 172 may be trained with an initial data set and deployed on the screening processor 170. However, the neural network 172 may continue to train and be modified based on ultrasound images acquired by the system 100 after deployment of the neural network 172 on the screening processor 170.
- the screening processor 170 may not include a neural network 172 and may instead implement other image processing techniques for pose estimation such as image segmentation, histogram analysis, edge detection or other shape or object recognition techniques.
- the screening processor 170 may implement the neural network 172 in combination with other image processing methods to estimate the pose of ultrasound images in relation to the organ model 174.
- the neural network 172 and/or other elements may be selected by a user via the user interface 124.
- Outputs (e.g., coverage data) from the screening processor 170, the scan converter 130, the multiplanar reformatter 132, and/or the volume renderer 134 may be coupled to an image processor 136 for further enhancement, buffering and temporary storage before being displayed on an image display 138.
- the image processor 136 may receive the output of the screening processor 170.
- the output of the scan converter 130 may be provided directly to the image processor 136.
- a graphics processor 140 may generate graphic overlays for display with the images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor may be configured to receive input from the user interface 124, such as a typed patient name or other annotations.
- the user interface 124 can also be coupled to the multiplanar reformatter 132 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
- the system 100 may include local memory 142.
- Local memory 142 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive).
- Local memory 142 may store data generated by the system 100 including ultrasound images, executable instructions, imaging parameters, training data sets, or any other information necessary for the operation of the system 100.
- the local memory 142 may store an organ model 174, which may be used by the screening processor 170.
- the local memory 142 may include multiple organ models 174 (e.g., liver, kidney, heart).
- the organ model 174 used by the screening processor 170 may be selected by a user via the user interface 124.
- the local memory 142 may provide the organ model 174 to the image processor 136 to generate an image of the organ model 174 for display.
- the organ model 174 may include data corresponding to an organ model image (e.g., 2D or 3D), a shape of the organ model (e.g., wire mesh), and/or an organ model coordinate system (e.g., origin, axes).
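- The components listed above could be grouped in a simple container; the field names below are illustrative only and do not come from the disclosure:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class OrganModel:
    """Illustrative container for the organ model components described above."""
    image: np.ndarray              # 3D organ model image (e.g., an averaged, co-registered volume)
    surface_points: np.ndarray     # N x 3 vertices of the segmentation mesh S_M
    surface_triangles: np.ndarray  # M x 3 vertex indices forming the surface patches
    origin: np.ndarray             # coordinate system origin (e.g., centroid of S_M)
    axes: np.ndarray               # 3 x 3 matrix of anatomical axis directions (X/Y/Z_Model)
```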
- the organ model image may be chosen to be a 3D image of the organ in a subject that is representative of a subject population (e.g. adult male with average organ size).
- the image of the organ model may be an average image of several or many subject organ images that are co-registered and averaged.
- An example method of generating an organ model may be found in Dura E. et al., Probabilistic liver atlas construction, BioMed Eng OnLine (2017) 16:15.
- the organ model 174 may include a model of the overall organ and one or more sub-models of specific regions within the organ (e.g., an organ model of the liver and a sub -model of the right lobe).
- User interface 124 may include display 138 and control panel 152.
- the display 138 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some embodiments, display 138 may comprise multiple displays.
- the control panel 152 may be configured to receive user inputs (e.g., exam type, organ model, information calculated by and/or displayed from the screening processor 170).
- the control panel 152 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others). In some embodiments, the control panel 152 may additionally or alternatively include soft controls (e.g., GUI control elements or simply, GUI controls) provided on a touch sensitive display. In some embodiments, display 138 may be a touch sensitive display that includes one or more soft controls of the control panel 152.
- various components shown in FIG. 1 may be combined.
- image processor 136 and graphics processor 140 may be implemented as a single processor.
- various components shown in FIG. 1 may be implemented as separate components.
- signal processor 126 may be implemented as separate signal processors for each imaging mode (e.g., B-mode, Doppler).
- one or more of the various processors shown in FIG. 1 may be implemented by general purpose processors and/or microprocessors configured to perform the specified tasks.
- one or more of the various processors may be implemented as application specific circuits.
- one or more of the various processors (e.g., image processor 136) may be implemented with one or more graphical processing units (GPU).
- FIG. 2 is a block diagram illustrating an example processor 200 according to principles of the present disclosure.
- Processor 200 may be used to implement one or more processors and/or controllers described herein, for example, image processor 136 shown in FIG. 1 and/or any other processor or controller shown in FIG. 1.
- Processor 200 may be any suitable processor type including, but not limited to, a microprocessor, a microcontroller, a digital signal processor (DSP), a field programmable array (FPGA) where the FPGA has been programmed to form a processor, a graphical processing unit (GPU), an application specific circuit (ASIC) where the ASIC has been designed to form a processor, or a combination thereof.
- the processor 200 may include one or more cores 202.
- the core 202 may include one or more arithmetic logic units (ALU) 204.
- the core 202 may include a floating point logic unit (FPLU) 206 and/or a digital signal processing unit (DSPU) 208 in addition to or instead of the ALU 204.
- the processor 200 may include one or more registers 212 communicatively coupled to the core 202.
- the registers 212 may be implemented using dedicated logic gate circuits (e.g., flip- flops) and/or any memory technology. In some embodiments the registers 212 may be implemented using static memory.
- the register may provide data, instructions and addresses to the core 202.
- processor 200 may include one or more levels of cache memory
- the cache memory 210 may provide computer- readable instructions to the core 202 for execution.
- the cache memory 210 may provide data for processing by the core 202.
- the computer-readable instructions may have been provided to the cache memory 210 by a local memory, for example, local memory attached to the external bus 216.
- the cache memory 210 may be implemented with any suitable cache memory type, for example, metal-oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology.
- the processor 200 may include a controller 214, which may control input to the processor
- Controller 214 may control the data paths in the ALU 204, FPLU 206 and/or DSPU 208. Controller 214 may be implemented as one or more state machines, data paths and/or dedicated control logic. The gates of controller 214 may be implemented as standalone gates, FPGA, ASIC or any other suitable technology.
- the registers 212 and the cache 210 may communicate with controller 214 and core 202 via internal connections 220A, 220B, 220C and 220D.
- Internal connections may be implemented as a bus, multiplexor, crossbar switch, and/or any other suitable connection technology.
- Inputs and outputs for the processor 200 may be provided via a bus 216, which may include one or more conductive lines.
- the bus 216 may be communicatively coupled to one or more components of processor 200, for example the controller 214, cache 210, and/or register 212.
- the bus 216 may be coupled to one or more components of the system, such as display 138 and control panel 152 mentioned previously.
- the bus 216 may be coupled to one or more external memories.
- the external memories may include Read Only Memory (ROM) 232.
- ROM 232 may be a masked ROM, Electronically Programmable Read Only Memory (EPROM) or any other suitable technology.
- the external memory may include Random Access Memory (RAM) 233.
- RAM 233 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology.
- the external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 235.
- the external memory may include Flash memory 234.
- the external memory may include a magnetic storage device such as disc 236.
- the external memories may be included in a system, such as ultrasound imaging system 100 shown in Fig. 1, for example local memory 142.
- the system 100 may be configured to implement a neural network included in the screening processor 170, which may include a CNN, to identify a pose of an ultrasound image acquired from a scan of an organ of a subject in relation to an organ model of the same organ.
- the neural network may be trained with imaging data such as image frames where one or more items of interest are labeled as present.
- The neural network may be trained to recognize images of different locations within an organ (e.g., a transverse or longitudinal slice through the right branch portal vein of the liver).
- a neural network training algorithm associated with the neural network can be presented with thousands or even millions of training data sets in order to train the neural network to determine a confidence level for each measurement acquired from a particular ultrasound image.
- the number of ultrasound images used to train the neural network(s) may range from about 1,000 to 200,000 or more.
- the number of images used to train the network(s) may be increased to accommodate a greater variety of patient variation, e.g., weight, height, age, etc.
- the number of training images may differ for different organs or features thereof, and may depend on variability in the appearance of certain organs or features. For example, the organs of pediatric patients may have a greater range of variability than organs of adult patients.
- FIG. 3 shows a block diagram of a process for training and deployment of a neural network in accordance with the principles of the present disclosure. The process shown in FIG. 3 may be used to train the neural network 172 included in the screening processor 170. The left hand side of FIG. 3, phase 1, illustrates the training of a neural network.
- training sets which include multiple instances of input arrays and output classifications may be presented to the training algorithm(s) of the neural network(s) (e.g., the AlexNet training algorithm, as described by Krizhevsky, A., Sutskever, I. and Hinton, G. E., “ImageNet Classification with Deep Convolutional Neural Networks,” NIPS 2012, or its descendants).
- Training may involve the selection of a starting network architecture 312 and the preparation of training data 314.
- the starting network architecture 312 may be a blank architecture (e.g., an architecture with defined layers and arrangement of nodes but without any previously trained weights) or a partially trained network, such as the inception networks, which may then be further tailored for classification of ultrasound images.
- the starting architecture 312 (e.g., blank weights) and training data 314 are provided to a training engine 310 for training the model.
- upon a sufficient number of iterations (e.g., when the model performs consistently within an acceptable error), the model 320 is said to be trained and ready for deployment, which is illustrated in the middle of FIG. 3, phase 2.
- on the right hand side of FIG. 3, or phase 3, the trained model 320 is applied (via inference engine 330) for analysis of new data 332, which is data that has not been presented to the model during the initial training (in phase 1).
- the new data 332 may include unknown images such as live ultrasound images acquired during a scan of a patient (e.g., cardiac images during an echocardiography exam).
- the trained model 320 implemented via engine 330 is used to classify the unknown images in accordance with the training of the model 320 to provide an output 334 (e.g., pose of the images with relation to an organ model).
- the output 334 may then be used by the system for subsequent processes 340 (e.g., output of a neural network 172 may be used by the screening processor 170 to generate a coverage map).
- the inference engine 330 may be modified by field training data 338. Field training data 338 may be generated in a similar manner as described with reference to phase 1, but the new data 332 may be used as the training data. In other examples, additional training data may be used to generate field training data 338.
- the starting architecture may be that of a convolutional neural network, or a deep convolutional neural network, which may be trained to perform image frame indexing, image segmentation, image comparison, or any combinations thereof.
- the training data 314 may include multiple (hundreds, often thousands or even more) annotated/labeled images, also referred to as training images. It will be understood that the training image need not include a full image produced by an imaging system (e.g., representative of the full field of view of an ultrasound probe or entire MRI volume) but may include patches or portions of images of the organ.
- FIGS. 4-6 illustrate an example organ model and process of training a neural network to determine poses of ultrasound images from an organ in relation to the organ model.
- FIG. 4 is an illustration of an example organ model 400 of a human liver according to principles of the present disclosure.
- the organ model 400 may be used to implement organ model 174 in some examples.
- the organ model 400 may include an image 402 of the organ model (the gray shading within wire mesh 404), an organ model shape (indicated by wire mesh 404) and an organ model coordinate system 406.
- the organ model coordinate system 406 is a three dimensional X-Y-Z-axis coordinate system where each axis is orthogonal to the other axes. The axes are labeled X_Model, Y_Model, Z_Model, respectively.
- the organ model 400 image may be chosen to be a 3D image of an organ in a subject that is representative of a subject population (e.g. adult male with average organ size).
- the organ model image 402 may be the average image of several or many subject organ images that are co-registered and averaged using methods known in the art as discussed previously with reference to FIG. 1.
- the organ model shape 404 may be derived from the organ model image 402 by segmenting the organ in the image using methods known in the art.
- the organ model shape 404 may be represented as a segmentation S_M.
- the segmentation may include a set of surface points and patches, such as triangles, that are connecting the points.
- the organ model coordinate system 406 may be chosen to be standardized in the position of the origin and the orientation of the axes X_Model, Y_Model, Z_Model.
- an origin 408 can be chosen to be at a centroid of the segmentation S_M.
- the coordinate system 406 may be aligned using an anatomical orientation of the organ in the body, e.g., the x-axis X_Model may be aligned with the left-right direction, the y-axis Y_Model with the anterior-posterior direction, and the z-axis Z_Model with the cranio-caudal direction.
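- A minimal sketch of assigning such standardized coordinates, assuming the segmentation S_M is available as an N x 3 point cloud and that unit vectors for the anatomical directions are known (names are illustrative):

```python
import numpy as np

def to_model_coordinates(points, left_right, ant_post, cran_caud):
    """Express segmentation points in a frame with origin at the centroid and
    axes aligned with the given unit anatomical directions (X/Y/Z_Model)."""
    origin = points.mean(axis=0)                         # centroid of S_M as the origin
    axes = np.stack([left_right, ant_post, cran_caud])   # rows: X_Model, Y_Model, Z_Model
    return (points - origin) @ axes.T                    # coordinates in the model frame
```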
- a neural network, such as neural network 172 may be trained to determine poses of ultrasound images of an organ in relation to the coordinate system 406 of the organ model 400.
- a set of training data may be acquired.
- a 3D reference image of a target organ may be acquired from multiple subjects (N_subject). In some examples, the number of subjects may be greater than ten (e.g., N_subject > 10).
- the reference images may be acquired by computed tomography (CT) or magnetic resonance imaging (MRI).
- a number of spatially tracked ultrasound images I_i,p may be acquired in the target organ.
- the ultrasound images I_i,p may be spatially tracked by an electromagnetic tracking system that tracks the location and orientation of the ultrasound probe in relation to the subject and/or target organ.
- An example of a suitable electromagnetic tracking system that may be used is the PercuNav System manufactured by Philips Healthcare, which may enable US/CT and US/MRI fusion imaging with automatic segmentation of the organ and automatic registration of the spatially tracked ultrasound images with a CT or MRI image of the organ.
- other electromagnetic tracking systems or other spatial tracking systems may be used in other examples.
- the spatially tracked ultrasound images may be spatially registered to a coordinate system of the subject organ segmentation S_p, thus yielding for each image I_i,p in patient p a transformation T_i,p that maps the centroid and orientation of I_i,p into the coordinate system of the target organ segmentation S_p.
- a spatially tracked ultrasound image I_i,p 500 may have a two-dimensional coordinate system 502 having an origin 504 in the center of the image 500 and two orthogonal axes, Y_US and X_US.
- the subject organ segmentation S_p 506 may have a 3D coordinate system 508 with three orthogonal axes X_S, Y_S, and Z_S.
- the coordinate system 508 may have an origin 510, which may be located at a centroid or a landmark anatomical feature of the target organ.
- the spatial tracking information of the image 500 may be used to generate the transformation T_i,p that converts the Y_US and X_US coordinates of the image 500 to the X_S, Y_S, and Z_S coordinates of the subject organ segmentation 506.
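- As a sketch (assuming the tracking system delivers T_i,p as a 4x4 homogeneous matrix, which is one common convention rather than something specified in the disclosure), mapping in-plane image coordinates into the segmentation frame could look like:

```python
import numpy as np

def image_to_segmentation(points_2d, T_ip):
    """Map in-plane (X_US, Y_US) image points into the 3D frame of the subject
    organ segmentation S_p using the tracked transformation T_i,p (4x4 matrix)."""
    n = points_2d.shape[0]
    # Embed the image plane at Z = 0 and use homogeneous coordinates.
    pts = np.column_stack([points_2d, np.zeros(n), np.ones(n)])  # shape (n, 4)
    mapped = (T_ip @ pts.T).T                                     # apply the 4x4 transform
    return mapped[:, :3]
```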
- the subject organ segmentation S_p of each subject may be spatially registered to the organ model shape S_M.
- Rigid, rigid+scale, affine, or locally deformable registration transformations may be used to calculate the transformation T_p2M.
- Transformation T_p2M may translate the coordinates X_S, Y_S, and Z_S of the subject organ segmentation S_p to the X_Model, Y_Model, Z_Model coordinates of the organ model S_M.
- the transformation may be restricted to 9 degrees of freedom (rigid translate/rotate + scaling along each of the 3 coordinate axes) or 7 degrees of freedom (rigid translate/rotate + uniform scaling along all coordinate axes).
- the spatial registration may be performed manually in some examples by defining the transformation parameters of T_p2M or by defining corresponding landmarks between S_p and S_M, and computing the global transformation T_p2M that accurately or approximately maps the landmarks from one to the other.
- the spatial registration may be performed automatically, for example, using methods such as those described by Dura et al. or by using the model segmentation, the model image, or both, and correspondingly the subject segmentation, subject image, or both, for the registration process.
- each individual subject segmentation, that is, the subject organ segmentation S_p generated for each subject’s target organ, may be mapped to the organ model coordinate system.
- transformations T_i,p and T_p2M may be combined to compute the pose T_stand_i,p of the image I_i,p in the organ model coordinate system.
- because T_p2M is potentially non-rigid in some examples, the algebraic concatenation of the transforms T_p2M x T_i,p may potentially be non-rigid as well.
- the pose T_stand_i,p may be computed as a rigid (e.g., rotate + translate) approximation of the combined T_p2M x T_i,p at the location of the ultrasound image in model coordinates.
- a set of fiducial points {P1} on the surface of the ultrasound image may be defined, the points {P1} may be transformed into model coordinates using the potentially non-rigid T_p2M x T_i,p, yielding the point set {P2}, and the rigid registration of {P1} onto {P2} may be computed, for example, using the Singular Value Decomposition (SVD) approach.
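- A minimal sketch of that SVD-based rigid fit, assuming {P1} and {P2} are given as matching N x 3 point arrays (this is the standard Kabsch/Procrustes construction, shown here as an illustration):

```python
import numpy as np

def rigid_fit(p1, p2):
    """Best rigid (rotate + translate) approximation mapping points p1 onto points p2."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    h = (p1 - c1).T @ (p2 - c2)              # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # rotation component
    t = c2 - r @ c1                          # translation component
    return r, t                              # together: the rigid pose approximation
```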
- scaling can be applied to the translation components of T_stand_i,p to normalize them.
- the scaling can be chosen to normalize the coordinates inside the organ segmentation to absolute values less than one, for example, by dividing the original coordinates and translation component by a scale factor S that is the maximum absolute value of any of the original (x,y,z) coordinates in the organ model segmentation S_M.
- This normalization may be beneficial for the efficiency of training a neural network (e.g., neural network 172).
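- A short sketch of that normalization, with the scale factor S taken as the maximum absolute model coordinate as described above:

```python
import numpy as np

def normalize_translation(t, model_points):
    """Scale the translation of T_stand_i,p so model coordinates lie roughly in [-1, 1]."""
    s = np.abs(model_points).max()   # max |x|, |y| or |z| over the model segmentation S_M
    return t / s, s
```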
- the training data may be used to train the neural network to determine poses T_stand_i,p of ultrasound images in the organ model coordinates based on the ultrasound image I_i,p.
- FIG. 7 is a block diagram of an example neural network 700 according to the principles of the present disclosure.
- the neural network 700 may be used to implement neural network 172.
- neural network 700 is a deep convolutional neural network that may include a plurality of intermediate convolutional layers 702, which may each include a number (m) of filters 704.
- the first convolutional layer 702 may receive ultrasound images 701 as an input.
- Each subsequent convolutional layer 702 may receive the output of the previous convolutional layer 702.
- the neural network 700 may include a final fully connected layer 706 configured to receive the output of the final convolutional layer 702.
- the fully connected layer 706 may be followed by two regression layers 708, 710 that receive the output of the fully connected layer 706 and output translational and rotational components of a transformation (e.g., the pose), respectively.
- the transformation may be a rigid transformation.
- the network 700 may be trained using a batch-wise approach on the task to regress the rigid transformation given an input ultrasound image.
- an input to the network 700 may be an ultrasound image and the output may be a ground truth position of that image in standardized organ model coordinates as described above.
- the network 700 may take a current image and determine the pose in standardized organ coordinates in terms of its translation and rotation component.
- the number of output parameter units may vary depending on how the translation and rotation components are represented (e.g., 3 or 4 parameters), and whether additional parameters such as scaling are estimated (e.g., 1 to 3 additional parameters).
- the network 700 is provided merely as an example and the principles of the present disclosure are not limited to the specific network shown in FIG. 7.
- the fully connected layer 706 may be followed by at least one more fully connected layer that may then be connected to the regression layers 708, 710.
- the two regression layers 708, 710 may be implemented as a single regression layer with correspondingly more parameters to represent the rotation and translation in the single regression layer. Combinations of these alternative structures of the network 700 may also be used.
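- A minimal PyTorch-style sketch of a network of this general shape (the layer counts, filter sizes, and the 3 + 4 output split into translation and quaternion rotation are illustrative assumptions, not the disclosed architecture):

```python
import torch.nn as nn

class PoseRegressionNet(nn.Module):
    """Convolutional layers, a fully connected layer, and two regression heads."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 256), nn.ReLU())
        self.translation = nn.Linear(256, 3)   # x, y, z in model coordinates
        self.rotation = nn.Linear(256, 4)      # e.g., quaternion components

    def forward(self, x):
        h = self.fc(self.features(x))
        return self.translation(h), self.rotation(h)
```

- Training such a network batch-wise would then minimize a regression loss (e.g., mean squared error) between the predicted and ground-truth pose parameters, as described above.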
- the neural network 172 may calculate the poses of ultrasound images of a subject’s organ acquired by the ultrasound imaging system 100 in relation to the organ model 174.
- the screening processor 170 receives the poses T_stand_i,p of the ultrasound images calculated by the trained neural network 172.
- the screening processor 170 may use the poses, the ultrasound images, and the organ model 174 to generate information that can be provided to a user, such as an ultrasound technician.
- the information may include data relating to the areas of the organ that have been imaged and/or remain to be imaged.
- ultrasound images could be acquired and the poses of the ultrasound images with relation to the organ model 174 could be determined in a similar manner as described with reference to FIGS. 4-6 when generating the training data set for the neural network 172.
- this would require that a patient have a pre-existing CT or MRI image of the organ, which not all patients have.
- this would require an ultrasound probe tracking system as described above, which increases the expense of the ultrasound imaging system.
- the tracking system may require setup and calibration, which may add to the exam time.
- calculating the individual poses of the ultrasound images to the CT image, calculating the transformation between the CT image and the organ model, and finally calculating the poses of the ultrasound images in relation to the organ model may take longer than a trained neural network calculating poses of the ultrasound images to the organ model.
- FIG. 8 illustrates a visual representation of a calculation performed by the screening processor 170 according to principles of the present disclosure.
- the organ model 174 and a wire frame, patch, or similar representation of the outline of the image area/volume 702 of each ultrasound image frame whose pose was estimated by the neural network 172 are shown placed in the model 174 at the poses estimated by the neural network 172.
- each ultrasound image frame may be a 2D image
- the 2D image represents an image of an imaging plane having a finite thickness.
- each ultrasound image frame covers (e.g., images) some finite volume in the organ.
- the volume within the organ model 174 that is covered by each ultrasound frame may be calculated for each frame.
- the calculation for each frame may be accumulated to generate coverage data of the organ over all acquired frames.
- coverage data it is meant what portions (e.g., regions) of the organ have been imaged by one or more image frames.
- the screening processor 170 may generate display data for providing a visualization to the user of the coverage data (e.g., a coverage map).
- the screening processor 170 may provide the visual representation of the calculation shown in FIG. 8 to a user (e.g., via display 138).
- FIG. 9 illustrates an example coverage map 900 according to principles of the present disclosure.
- the coverage map 900 may include a visualization 902 of the organ model 174.
- a coverage gap 904 is highlighted (e.g., shown in a different color, pattern, intensity, etc. compared to the surrounding organ) on the visualization 902.
- Coverage gap refers to a region that has not yet been imaged based on the calculations of the screening processor 170.
- the highlighting may be a graphical overlay or the voxels representing the coverage gap may be assigned highlighted values for display.
- the coverage map 900 could highlight areas that have been covered, that is, areas already imaged by the user.
- a voxel- based volumetric model of the organ may be used (e.g., voxelation method).
- the volume occupied by the organ model 174 may be divided into cubes (e.g., voxels) of a desired side length (e.g., 0.1 ... 20mm) and assigned “coverage values” of “0” if inside the organ model, and “-1” if outside.
- the dimensions of the voxels may be based on a slice thickness of the image frames in some examples.
- the voxel dimensions may be based on other factors (e.g., the resolution of the ultrasound images). FIG. 10A shows one slice 1000 of voxels 1002 encompassing a portion 1004 of the organ model. For ease of illustration, the cubes are shown in a single dimension.
- as shown in FIG. 10B, for each ultrasound image pose estimate, the intersection of the frame 1008 with the organ voxels is calculated. Each voxel 1002 inside the organ that is intersected by the frame is marked as “covered” by assigning a different “coverage value” to that voxel, e.g. “+1”.
- the voxel value and/or value increase may be a fractional value, representing the fraction of the voxel 1002 that is intersected by the image frame 1008, 1010.
- the voxel value may be set or increased based on the fraction of the voxel 1002 that was intersected by the image frame 1008, 1010.
- a voxel value intersected by an image frame 1008, 1010 or near an image frame 1008, 1010 may be set or increased based on the distance of the center of that voxel 1002 from the image frame 1008, 1010. This reflects the fact that each ultrasound image frame 1008, 1010 has a finite thickness (e.g. 1 to 3 mm) and that tissue areas outside the immediate center of the image frame 1008, 1010 can also be considered “covered”.
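- One conceivable way to mark covered voxels for a single frame, including the distance-based weighting described above, is sketched below. The frame is simplified to an infinite plane of finite thickness and the lateral extent of the sector is ignored; all names and defaults are assumptions:

```python
import numpy as np

def mark_frame_coverage(coverage: np.ndarray,
                        voxel_centers: np.ndarray,    # (N, 3) voxel centres in mm, ordered like coverage.ravel()
                        plane_point: np.ndarray,      # any point on the imaging plane (mm)
                        plane_normal: np.ndarray,     # unit normal of the imaging plane
                        slice_thickness_mm: float = 2.0) -> None:
    """Increase the coverage value of organ voxels intersected by one image frame.

    The increment is weighted by the distance of each voxel centre from the
    plane, so voxels at the slice centre count fully and voxels at the edge of
    the slice thickness count less (a fractional coverage value).
    """
    flat = coverage.ravel()                           # view onto the coverage grid
    organ = flat >= 0.0                               # only voxels inside the organ model
    dist = np.abs((voxel_centers - plane_point) @ plane_normal)
    weight = np.clip(1.0 - dist / (slice_thickness_mm / 2.0), 0.0, 1.0)
    flat[organ] += weight[organ]
    # After all frames are processed, coverage gaps are the organ voxels still equal to 0.
```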
- the coverage gaps may be determined and visualizations of the coverage gaps (or alternately covered areas) may be generated by the screening processor 170.
- the coverage gaps are areas inside the organ model that have not been covered by any ultrasound frame, that is, voxels having voxel values of 0.
- An example of a visualization of a coverage gap was shown in FIG. 9.
- FIG. 11 illustrates an example confidence coverage map 1100 according to principles of the present disclosure.
- the confidence coverage map 1100 may include a visualization 1102 of the organ model 174.
- different regions are highlighted in different manners (e.g., different colors or patterns) to indicate different levels of confidence in the coverage of the organ by the acquired ultrasound images.
- region 1104 is a coverage gap, that is, no image frames have been determined to cover that region of the organ.
- Region 1106 highlights a region where the screening processor 170 has determined it has medium confidence that the organ has been covered.
- the voxels in region 1106 may have voxel values that are equal to or greater than a first threshold value (e.g., 0) but below a second threshold value (e.g., 2).
- Region 1108 highlights a region where the screening processor 170 has determined it has high confidence that the organ has been sufficiently covered. For example, the voxels in region 1108 may meet or exceed the second threshold value.
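- A sketch of how voxel values might be mapped to the confidence regions described above is given below; the thresholds follow the illustrative values in the text, and treating a value of exactly 0 as a gap is an assumption:

```python
import numpy as np

def confidence_labels(coverage: np.ndarray,
                      high_thresh: float = 2.0) -> np.ndarray:
    """Label voxels: -1 outside organ, 0 coverage gap, 1 medium confidence, 2 high confidence."""
    labels = np.full(coverage.shape, -1, dtype=np.int8)
    organ = coverage >= 0.0
    labels[organ] = 1                                 # medium confidence by default
    labels[organ & (coverage == 0.0)] = 0             # gap: no frame intersected the voxel
    labels[organ & (coverage >= high_thresh)] = 2     # high confidence
    return labels
```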
- the screening processor 170 may determine a quality metric for each ultrasound frame (e.g., resolution, signal-to-noise) and/or each portion of the ultrasound frame. The corresponding quality metric may be applied to the voxel values of the voxels intersected by that frame or frame portion.
- the voxel values relating to the image quality metric may be kept separate from the voxel values relating to coverage.
- the voxel values may be a combination of the quality metric and coverage (e.g., sum, weighted sum, product).
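- As a sketch, the combination of coverage and quality could be computed per voxel in any of the ways mentioned; the weights below are placeholders, not values from the disclosure:

```python
import numpy as np

def combine_quality_and_coverage(coverage: np.ndarray,
                                 quality: np.ndarray,
                                 mode: str = "weighted_sum",
                                 w_coverage: float = 0.7,
                                 w_quality: float = 0.3) -> np.ndarray:
    """Combine per-voxel coverage values with per-voxel quality values."""
    if mode == "sum":
        return coverage + quality
    if mode == "product":
        return coverage * quality
    if mode == "weighted_sum":
        return w_coverage * coverage + w_quality * quality
    raise ValueError(f"unknown combination mode: {mode}")
```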
- the screening processor 170 may generate a quality coverage map (not shown) that may indicate the quality of the images obtained for different regions of the organ.
- the appearance of the quality coverage map may be similar in appearance to the confidence coverage map 1100 in some examples.
- the quality coverage map may make the user aware that although regions of the organ have been covered by one or more image frames, the quality of those image frames may be subpar.
- the screening processor 170 may provide renderings of frame poses, coverage gaps, and/or coverage confidence separately for images acquired in transverse versus longitudinal views. In some examples, this may be accomplished by classifying each image based on its rotation angle R_y around the y-axis (in the anterior-posterior direction) into transverse views (if |R_y| < 45 degrees) or longitudinal views (if |R_y| > 45 degrees). Separate voxelated organ models for transverse and longitudinal views may be used to keep track of organ coverage achieved with each of the views in some examples.
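- The 45-degree classification can be expressed in a few lines; this is only a sketch and assumes the rotation angle is available in degrees from the estimated pose:

```python
def classify_view(r_y_degrees: float) -> str:
    """Classify a frame as transverse or longitudinal from its rotation about
    the y-axis (anterior-posterior direction), using the 45-degree rule."""
    return "transverse" if abs(r_y_degrees) < 45.0 else "longitudinal"

# Separate voxelated organ models could then be updated per view type, e.g.:
# coverage_by_view = {"transverse": coverage_t, "longitudinal": coverage_l}
# mark_frame_coverage(coverage_by_view[classify_view(r_y)], ...)
```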
- the screening processor 170 may not generate coverage maps to provide the coverage data to a user, or the screening processor 170 may generate additional textual, graphical, or audio data to the user in addition to coverage maps.
- a checklist 1200 may be generated based on the coverage data.
- the checklist 1200 may include specific regions of interest 1202 within the target organ that may be created and visualized to guide a user to a complete examination of the target organ according to specific guidelines.
- For a liver ultrasound exam, for example, the checklist 1200 could include scanning the right, left, and caudate hepatic lobes, the right hemidiaphragm, the intrahepatic portion of the inferior vena cava, the hepatic veins, the main portal vein, and the right and left branches of the portal vein.
- the checklist 1200 may be automatically checked off 1204 as the screening processor 170 detects that images have been acquired in the respective areas of the organ model 174.
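- A possible way to drive such automatic check-off from the voxelated coverage data is sketched below; the region masks and the 90% completion threshold are assumptions for illustration:

```python
import numpy as np

def update_checklist(checklist: dict,
                     coverage: np.ndarray,
                     region_masks: dict,
                     min_fraction: float = 0.9) -> dict:
    """Mark a checklist item as done once enough of its sub-region has been imaged.

    region_masks maps a region name (e.g. "right hepatic lobe") to a Boolean
    mask over the voxelated organ model identifying that sub-region.
    """
    for name, mask in region_masks.items():
        total = np.count_nonzero(mask)
        covered = np.count_nonzero(coverage[mask] > 0.0)
        checklist[name] = total > 0 and covered / total >= min_fraction
    return checklist
```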
- the organ model 174 may be sub-divided into all the areas that should be examined according to the guidelines in some examples.
- As noted above, many ultrasound examinations have standard protocols that specify regions and views a user must acquire.
- Some ultrasound imaging systems have features that walk a user through each step of a standard protocol to ensure all of the required views are acquired in the exam. However, as shown in FIG. 12, the specific regions of interest 1202 need not be checked off in the order presented. Thus, determining the poses of the acquired ultrasound images may allow a user to deviate from an order in a protocol for a given examination while still ensuring that all views required by a standard protocol are acquired. That is, the ultrasound imaging system 100 may automatically “jump” to a step in the protocol based on the determined pose. In some examples, the poses determined by the neural network 172 may allow the ultrasound imaging system 100 to provide the user additional tools necessary to complete the standard protocol (e.g., measurement tools, labels, turn on Doppler mode).
- the ultrasound imaging system 100 may provide calliper tools to the user via the user interface 124. This may be done regardless of whether the user goes through the standard protocol in the designated order or if the user “jumps around” between steps. Thus, determining the poses of the images may allow the user to acquire images as desired and not follow prompts to follow a standard protocol in a specific order.
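- One conceivable way for the system to "jump" to the matching protocol step is to pick the step whose reference location in the organ model is closest to the latest estimated frame position. This heuristic is an assumption for illustration, not the method of the disclosure:

```python
import numpy as np

def nearest_protocol_step(frame_position_mm: np.ndarray,
                          step_targets: dict) -> str:
    """Return the protocol step whose target location (in organ-model
    coordinates) is closest to the current frame position.

    step_targets is a hypothetical mapping such as
    {"main portal vein": np.array([42.0, 10.0, -15.0]), ...}.
    """
    return min(step_targets,
               key=lambda step: np.linalg.norm(step_targets[step] - frame_position_mm))
```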
- the total percentage of organ coverage may be calculated by the screening processor 170 and provided for display as text instead of or in addition to a coverage map.
- the coverage percentage may be determined by calculating the fraction of the organ voxels with a coverage value of at least 1 over the total number of organ voxels in the voxelated organ model.
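- Expressed over the voxelated organ model, the coverage percentage is a simple ratio, as in the following sketch (names are illustrative):

```python
import numpy as np

def coverage_percentage(coverage: np.ndarray) -> float:
    """Percentage of organ voxels with a coverage value of at least 1."""
    organ_voxels = np.count_nonzero(coverage >= 0.0)      # voxels inside the organ model
    covered_voxels = np.count_nonzero(coverage >= 1.0)    # voxels covered by at least one frame
    return 100.0 * covered_voxels / max(organ_voxels, 1)
```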
- the coverage percentage may allow the user to assess whether the ultrasound examination overall was sufficient to scan for significant disease, or whether a referral to another imaging modality (e.g. CT or MRI) is required. For example, the coverage percentage may alert the user that limited scanning windows in a particular patient prohibited acquisition of some required views.
- the poses of the ultrasound images determined by the neural network 172 may be used by the screening processor 170 for other or additional purposes.
- the pose for the latest acquired ultrasound image may be used to determine a location and/or orientation of the ultrasound probe (e.g., probe 112) with relation to the organ being scanned.
- the screening processor 170 may provide display data for a coverage map 1300 with a visualization 1302 of the organ model 174, coverage gap(s) 1304, and a graphic 1306 representing the ultrasound probe in the position and orientation calculated from the determined pose of the ultrasound image.
- display data for directional indicators 1308, 1310 may be provided to help orient the user with respect to the organ of the subject.
- the screening processor 170 may use the calculated coverage gaps and the calculated location and/or orientation of the ultrasound probe to determine a displacement of the probe (e.g., translation, rotation) that may be used to reduce or eliminate one or more coverage gaps.
- the displacement of the probe determined by the screening processor 170 may be provided to the user by animating the graphic 1306 to move in the determined displacement.
- the displacement of the probe may be provided by audible cues via a speaker and/or as text 1312.
- the displacement of the probe may be provided to the user by a combination of methods.
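- A very rough sketch of one way a corrective displacement could be derived from the coverage data is given below; only a translation toward the centroid of the remaining gap is considered, rotation guidance and acoustic-window constraints are ignored, and the heuristic is an assumption rather than the disclosed method:

```python
import numpy as np

def suggest_displacement(coverage: np.ndarray,
                         voxel_centers: np.ndarray,     # (N, 3) mm, ordered like coverage.ravel()
                         probe_position_mm: np.ndarray) -> np.ndarray:
    """Return a translation vector (mm) pointing from the current probe
    position toward the centroid of the uncovered organ voxels."""
    gap = coverage.ravel() == 0.0                        # organ voxels not yet imaged
    if not np.any(gap):
        return np.zeros(3)                               # no coverage gap remains
    gap_centroid = voxel_centers[gap].mean(axis=0)
    return gap_centroid - probe_position_mm
```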
- the user may be able to select what data and how the data is provided by providing inputs via the user interface 124.
- the user may select the exam type (e.g., heart, liver), type(s) of coverage data (e.g., coverage gap, coverage percentage, checklist), probe displacement, and/or combination thereof.
- the user may manipulate the display of the coverage data, for example, by rotating the visualization of the organ model.
- a patient receiving an ultrasound scan may have previously undergone a CT or MRI exam of an organ under examination.
- the CT or MRI image may be registered to the organ model 174 (e.g., by deformable model-based registration techniques) and the neural network 172 may calculate poses of the ultrasound images with respect to the CT or MRI image based on the registration between the organ model 174 and the CT or MRI image.
- the screening processor 170 may provide the user with a visualization of the CT or MRI image rather than the organ model 174 and visualization of the coverage data overlaid on the CT or MRI image.
- the organ model 174 may include multiple organ models (e.g., models of an entire organ, models/sub-models of specific regions of the organ).
- Soft tissue organs, such as the liver, might deform significantly during the scanning procedure. This may be due to pressure applied by the ultrasound probe, breathing, and/or other causes.
- organs of patients that previously underwent resection or that have tumours arising from the normal tissue (e.g., normal liver parenchyma) may differ significantly from the organ model 174, which may be a statistical model generated from one or more subjects as discussed previously.
- a gross registration to an overall organ model may be performed, but as soon as the probe approaches a specific region in the organ, the screening processor 170 may switch from an organ model of the entire organ to an organ model/sub-model of a more specific region of the organ (e.g., a model of the right lobe of the liver). In some examples, this may allow the rigidity or simple affine deformation of the liver to be preserved locally, which may provide higher registration accuracy in some applications.
- outliers in pose determinations may be detected by the neural network 172 and/or screening processor 170.
- the outliers may be used to generate visual data that indicates regions to the user that might be suspicious or regions where the user cannot rely upon the coverage data.
- the regions associated with the poses of frames that deviate by more than a standard deviation may be indicated to the user (e.g., via highlighting on a coverage map).
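- A simple outlier criterion over the per-frame position estimates might look as follows; the median and standard-deviation rule is illustrative only and not necessarily the one used by the screening processor 170:

```python
import numpy as np

def flag_pose_outliers(translations_mm: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag frames whose estimated position deviates strongly from the rest.

    translations_mm is an (N, 3) array of per-frame translation estimates in
    organ-model coordinates; a frame is flagged when its distance from the
    median position exceeds the mean distance by more than k standard deviations.
    """
    center = np.median(translations_mm, axis=0)
    dist = np.linalg.norm(translations_mm - center, axis=1)
    return dist > dist.mean() + k * dist.std()
```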
- the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
- the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
- processors described herein can be implemented in hardware, software, and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
- the functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instruction to perform the functions described herein.
- Although the present system has been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Veterinary Medicine (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physiology (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
Images of an organ acquired by an imaging system may be spatially registered to an organ model. Based on the spatial registration of the images, the imaging system may determine which regions of the organ have or have not been imaged. The imaging system may provide a user with a visualization of the covered regions, for example, a rendering of the organ model with regions that have not been imaged highlighted. In some examples, the imaging system may provide a percentage of imaging coverage of the organ. In some examples, the imaging system may provide instructions to the user on how to move an ultrasound probe in order to image regions not yet covered by the user.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962939131P | 2019-11-22 | 2019-11-22 | |
| US62/939,131 | 2019-11-22 | 2019-11-22 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021099171A1 true WO2021099171A1 (fr) | 2021-05-27 |
Family
ID=73344034
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2020/081548 (Ceased) WO2021099171A1 (fr) | Systèmes et procédés de criblage par imagerie | 2019-11-22 | 2020-11-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021099171A1 (fr) |
- 2020-11-10: WO application PCT/EP2020/081548 filed (published as WO2021099171A1, fr); status: not active (Ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6530885B1 (en) | 2000-03-17 | 2003-03-11 | Atl Ultrasound, Inc. | Spatially compounded three dimensional ultrasonic images |
| US6443896B1 (en) | 2000-08-17 | 2002-09-03 | Koninklijke Philips Electronics N.V. | Method for creating multiplanar ultrasonic images of a three dimensional object |
| US20090036775A1 (en) * | 2007-07-31 | 2009-02-05 | Olympus Medical Systems Corp. | Medical guiding system |
| WO2018195946A1 (fr) * | 2017-04-28 | 2018-11-01 | 深圳迈瑞生物医疗电子股份有限公司 | Procédé et dispositif pour l'affichage d'une image ultrasonore et support de stockage |
| US20190200964A1 (en) * | 2018-01-03 | 2019-07-04 | General Electric Company | Method and system for creating and utilizing a patient-specific organ model from ultrasound image data |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12402863B2 (en) | 2020-12-18 | 2025-09-02 | Koninklijke Philips N.V. | Ultrasound image-based identification of anatomical scan window, probe orientation, and/or patient position |
| CN115829947A (zh) * | 2021-11-30 | 2023-03-21 | 上海联影智能医疗科技有限公司 | 模型处理设备及方法 |
| WO2025052139A1 (fr) * | 2023-09-07 | 2025-03-13 | Intelligent Ultrasound Limited | Appareil et procédé de capture d'image à cible entière |
| US12213774B1 (en) * | 2024-01-02 | 2025-02-04 | nference, inc. | Apparatus and method for locating a position of an electrode on an organ model |
| EP4595897A1 (fr) * | 2024-02-01 | 2025-08-06 | Esaote S.p.A. | Appareil et procédé pour effectuer des examens d'imagerie diagnostique |
| US12387365B1 (en) * | 2024-07-29 | 2025-08-12 | Anumana, Inc. | Apparatus and method for object pose estimation in a medical image |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6745861B2 (ja) | リアルタイム超音波イメージングのトリプレーン画像の自動セグメント化 | |
| KR102269467B1 (ko) | 의료 진단 이미징에서의 측정 포인트 결정 | |
| WO2021099171A1 (fr) | Systèmes et procédés de criblage par imagerie | |
| US10362941B2 (en) | Method and apparatus for performing registration of medical images | |
| JP7464593B2 (ja) | 医用画像内でのインターベンションデバイスの識別 | |
| EP2807978A1 (fr) | Procédé et système d'acquisition en 3D d'images ultrasonores | |
| EP2846310A2 (fr) | Procédé et appareil d'enregistrement d'images médicales | |
| CN111683600B (zh) | 用于根据超声图像获得解剖测量的设备和方法 | |
| EP4125606B1 (fr) | Système et procédé pour l'imagérie et la determination d'un tissu adipeux épicardique | |
| US20240273822A1 (en) | System and Method for Generating Three Dimensional Geometric Models of Anatomical Regions | |
| CN106462967B (zh) | 用于超声图像的基于模型的分割的采集取向相关特征 | |
| US20240119705A1 (en) | Systems, methods, and apparatuses for identifying inhomogeneous liver fat | |
| US20220287686A1 (en) | System and method for real-time fusion of acoustic image with reference image | |
| EP4210588B1 (fr) | Dispositfs et procédés pour la mesure de raideur du coeur | |
| WO2024013114A1 (fr) | Systèmes et procédés de criblage d'imagerie | |
| WO2025087746A1 (fr) | Systèmes et procédés de dépistage par imagerie | |
| WO2025124940A1 (fr) | Systèmes et procédés de dépistage par imagerie | |
| Bravo et al. | 3D ultrasound in cardiology |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20804497; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20804497; Country of ref document: EP; Kind code of ref document: A1 |