WO2025098957A1 - Systems and methods for evaluating ultrasound sweeps - Google Patents
Systems and methods for evaluating ultrasound sweeps
- Publication number
- WO2025098957A1 (PCT application PCT/EP2024/081115)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging data
- ultrasound imaging
- recommendation
- data
- sweep
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/54—Control of the diagnostic device
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5207—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
- A61B8/5269—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
- A61B8/08—Clinical applications
- A61B8/0866—Clinical applications involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
Definitions
- The present disclosure relates to evaluating acquisition of medical images. More specifically, this application relates to evaluating ultrasound image acquisitions, such as sweep acquisitions.
- Various medical imaging modalities can be used for clinical analysis and medical intervention, as well as visual representation of the function of organs and tissues, such as magnetic resonance imaging (MRI), ultrasound (US), or computed tomography (CT).
- Implementations of medical imaging modalities may use one or more protocols, such as imaging protocols for acquiring ultrasound images using sweeps.
- A sweep can refer to acquiring multiple image frames at different physical locations during continuous or substantially continuous movement of the imaging plane (e.g., by physically moving the transducer and/or by electronic beamforming). Sweeps may be acquired according to a pattern and/or grid. Imaging protocols can be used, for example, to evaluate a fetus using an ultrasound imaging system.
- The disclosed technology receives ultrasound imaging data acquired by a novice user according to a protocol, which can be a protocol for acquiring ultrasound images using sweeps.
- The disclosed technology can analyze the novice user ultrasound imaging data in comparison to a set of ultrasound imaging data acquired by expert users to determine characteristics of the novice user ultrasound imaging data. For example, the disclosed technology can identify the closest expert acquisition and generate recommendations based on comparing the novice user ultrasound imaging data to that acquisition.
- The recommendations can include recommendations to improve data quality or to repeat an acquisition.
- Operations of the disclosed technology can be performed using a neural network trained to analyze the novice user ultrasound imaging data in relation to a latent space comprising representations of the expert user ultrasound imaging data.
- An ultrasound imaging system can comprise an ultrasound probe configured to acquire ultrasound imaging data and at least one processor in communication with the ultrasound probe.
- The at least one processor is configured to receive the ultrasound imaging data, which corresponds to a set of sweep acquisitions performed using the ultrasound probe.
- A representation of the received ultrasound imaging data in a latent space is generated.
- The representation of the received ultrasound imaging data in the latent space is compared to a latent distribution of a set of reference ultrasound imaging data represented in the latent space to identify a closest subset of the set of reference ultrasound imaging data.
- At least one recommendation is generated based on a difference between the representation of the received ultrasound imaging data in the latent space and the identified closest subset of the set of reference ultrasound imaging data. The at least one recommendation comprises a recommendation to improve a quality of the received ultrasound imaging data, a recommendation to repeat at least a portion of the set of sweep acquisitions, or both.
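The compare-and-recommend step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the Euclidean distance metric, the threshold values, and all function names are assumptions.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two latent-space coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_expert_subset(z_novice, expert_latents, k=3):
    """Rank expert acquisitions by latent-space distance to the novice
    representation and keep the k closest."""
    return sorted(expert_latents, key=lambda z: euclidean(z_novice, z))[:k]

def recommend(z_novice, expert_latents, repeat_threshold=2.0):
    """Map the distance to the nearest expert acquisition onto a coarse
    recommendation (thresholds are illustrative only)."""
    nearest = closest_expert_subset(z_novice, expert_latents, k=1)[0]
    d = euclidean(z_novice, nearest)
    if d > repeat_threshold:
        return "repeat acquisition"
    if d > repeat_threshold / 2:
        return "improve quality"
    return "acceptable"
```

In practice the latent coordinates would come from a trained encoder; here they are passed in directly so the ranking and thresholding logic stands on its own.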
- The representation of the received ultrasound imaging data in the latent space is compared to the latent distribution of the set of reference ultrasound imaging data using a neural network trained on the set of reference ultrasound imaging data.
- The at least one processor is configured to access and/or generate a training dataset comprising the set of reference ultrasound imaging data and to train the neural network using the training dataset.
- The at least one processor is configured to receive updated ultrasound imaging data based at least in part on the at least one recommendation and to evaluate at least one characteristic of an anatomy in the updated ultrasound imaging data, such as a fetal health characteristic of a fetus.
- The received ultrasound imaging data comprises sensor data captured via at least one of an accelerometer, a magnetometer, a gyroscope, or an electromagnetic localization sensor.
- The difference between the representation of the received ultrasound imaging data in the latent space and the identified closest subset of the set of reference ultrasound imaging data is based, at least in part, on a reconstruction error, a Kullback-Leibler divergence, or both.
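The two difference measures named here have standard closed forms, sketched below for a diagonal-Gaussian encoder. How the terms are weighted per sweep is not specified by the source.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Mean squared error between an input frame and its autoencoder
    reconstruction."""
    return float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))

def kl_divergence(mu, log_var):
    """KL divergence between the encoder's diagonal Gaussian q(z|x)
    and a standard-normal prior (the usual VAE closed form)."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return float(-0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var)))
```

A perfect reconstruction gives zero error, and a posterior equal to the standard-normal prior gives zero KL divergence; either quantity growing large flags an acquisition that the expert-trained model represents poorly.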
- The at least one processor is configured to automatically apply the at least one recommendation to the received ultrasound imaging data.
- The set of reference ultrasound imaging data comprises expert-acquired data or data having characteristics of expert-acquired data, which can be simulated data.
- The set of sweep acquisitions is associated with an ultrasound imaging protocol.
- A non-transitory computer-readable medium carries instructions that, when executed, cause a processor to perform at least one method disclosed herein.
- FIG. 1 is a block diagram of an ultrasound imaging system arranged in accordance with principles of the present disclosure.
- FIG. 2 is a block diagram illustrating a workflow for training a model to evaluate sweep acquisitions, according to principles of the present disclosure.
- FIG. 3 is a display diagram illustrating a mapping of sweep acquisitions in a latent space, according to principles of the present disclosure.
- FIG. 4 is a block diagram illustrating a workflow for applying a trained model to evaluate sweep acquisitions, according to principles of the present disclosure.
- FIG. 5 is a flow diagram illustrating a process for evaluating ultrasound imaging data, according to principles of the present disclosure.
- Medical imaging systems, such as ultrasound imaging systems, can be used in resource-constrained care settings where trained, experienced, and/or expert users may not be readily available.
- A non-expert user (e.g., an untrained, minimally trained, and/or novice user) can perform imaging sweeps along a pre-determined grid, which can be done without accessing the acquired images in real time (termed “blind sweeps”), or using another protocol.
- A user (e.g., a non-expert user) can perform guided sweeps in which the user follows guidance provided via a device or system to reach a position and/or orientation for capturing a specific view. These sweeps contrast with the freehand sweeps performed by a trained user, during which the user can move the transducer as needed to localize anatomical structures of interest.
- Existing systems may have the ability to use imaging data acquired using blind sweeps or guided sweeps, such as for performing gestational estimations (e.g., age, viability, multiple gestation) based on blind sweep ultrasound imaging data.
- Some systems may use one or more artificial intelligence (AI) models to analyze medical imaging data.
- Non-expert users may not have the ability to achieve or maintain the quality (e.g., sufficient resolution, non-blurry images, object of interest fully in frame) and consistency (e.g., consistent spacing of image frames, sufficient number of frames across the entire object of interest) necessary to reliably and efficiently apply these existing systems.
- Low-quality acquisitions can impact the performance of AI models used for automatic estimation of fetal characteristics (e.g., gestational age).
- The present disclosure describes systems and related methods for evaluating medical imaging data, such as evaluating the quality of sweep acquisitions (e.g., blind sweeps or guided sweeps) performed using ultrasound imaging systems.
- The disclosed technology can be implemented using an ultrasound imaging system capable of operating with imaging presets optimized and/or configured for a particular type of imaging, such as obstetric imaging.
- A “sweep” or “sweep acquisition” can refer to one or more operations for using an ultrasound transducer or probe to acquire imaging data, which may be according to a protocol, such as an operation for acquiring imaging data by moving the transducer or probe along one or more predetermined paths relative to an anatomy of a subject.
- A sweep acquisition can follow a grid or other pattern.
- The disclosed technology can include one or more models trained to evaluate medical imaging data to determine a quality of the medical imaging data and/or to generate one or more recommendations. For example, recommendations may be provided to improve the quality of the medical imaging data and/or to repeat acquisition of at least a portion of the medical imaging data.
- The disclosed technology includes an AI-based model to assess the quality of the sweeps collected by a non-expert user by comparing them to sweeps collected by experts and/or trained users.
- The model can further generate suggestions to improve the acquired data and/or to redo at least a portion of a scan, according to the outcome of the quality assessment.
- The disclosed technology includes at least one processing controller that receives a sequence of two-dimensional (2D) ultrasound frames comprising images of a fetus and evaluates the scan quality by comparing the images of the fetus captured by a user using the ultrasound probe with those acquired by expert users. Suggestions for improving the acquired data quality or recapturing the data can be made by this system before the data is used for clinical parameter estimations.
- The processing controller can be a standalone system receiving real-time or offline data, or it can be integrated into an ultrasound imaging system.
- FIG. 1 is a block diagram of an ultrasound imaging system 100 arranged in accordance with principles of the present disclosure. In the ultrasound imaging system 100 of FIG. 1, an ultrasound probe 112 includes a transducer array 114 for transmitting ultrasonic waves and receiving echo information.
- The transducer array 114 can be implemented as a linear array, a convex array, a phased array, and/or a combination thereof.
- The transducer array 114, for example, can include a two-dimensional array (as shown) of transducer elements capable of scanning in both elevation and azimuth dimensions for 2D and/or 3D imaging.
- The transducer array 114 can be coupled to a microbeamformer 116 in the probe 112, which controls transmission and reception of signals by the transducer elements in the array.
- The microbeamformer 116 is coupled by the probe cable to a transmit/receive (T/R) switch 118, which switches between transmission and reception and protects the main beamformer 122 from high-energy transmit signals.
- The T/R switch 118 and other elements in the system can be included in the ultrasound probe 112 rather than in a separate ultrasound system base.
- The ultrasound probe 112 may be coupled to the ultrasound imaging system via a wireless connection (e.g., WiFi, Bluetooth).
- The transmission of ultrasonic beams from the transducer array 114 under control of the microbeamformer 116 is directed by the transmit controller 120 coupled to the T/R switch 118 and the beamformer 122, which receives input from the user’s operation of the user interface (e.g., control panel, touch screen, console) 125.
- The user interface 125 may include soft and/or hard controls.
- One of the functions controlled by the transmit controller 120 is the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 114, or at different angles for a wider field of view.
- The partially beamformed signals produced by the microbeamformer 116 are coupled via channels 115 to a main beamformer 122, where partially beamformed signals from individual patches of transducer elements are combined into a fully beamformed signal.
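The combining step described here is, at its core, delay-and-sum beamforming. The toy sketch below uses integer sample delays; real beamformers apply sub-sample delays, dynamic focusing, and apodization weights, none of which are detailed in the source.

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples):
    """Toy delay-and-sum beamformer: shift each channel's RF trace by
    its integer focusing delay (in samples), then sum across channels
    so echoes from the focal point add coherently."""
    n_channels, n_samples = channel_data.shape
    out = np.zeros(n_samples)
    for ch in range(n_channels):
        d = int(delays_samples[ch])
        out[d:] += channel_data[ch, :n_samples - d]
    return out
```

With delays chosen so that each channel's echo from the same scatterer lands on the same output sample, the summed peak grows with the number of channels while uncorrelated noise does not.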
- The microbeamformer 116 can be omitted, with the transducer array 114 coupled via channels 115 directly to the beamformer 122.
- The system 100 can be configured (e.g., include a sufficient number of channels 115 and have a transmit/receive controller programmed to drive the transducer array 114) to acquire ultrasound data responsive to a plane wave or diverging beams of ultrasound transmitted toward the subject.
- The number of channels 115 from the ultrasound probe may be less than the number of transducer elements of the transducer array 114, and the system can be operable to acquire ultrasound data packaged into a smaller number of channels than the number of transducer elements.
- The beamformed signals are coupled to a signal processor 126.
- The signal processor 126 can process the received echo signals in various ways, such as bandpass filtering, decimation, I and Q component separation, and/or harmonic signal separation.
- The signal processor 126 can also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination.
- The processed signals are coupled to a B-mode processor 128, which can employ amplitude detection for the imaging of structures in the body.
- The signals produced by the B-mode processor 128 are coupled to a scan converter 130 and a multiplanar reformatter 132.
- The scan converter 130 arranges the echo signals in the spatial relationship from which they were received in a desired image format. For instance, the scan converter 130 can arrange the echo signals into a two-dimensional (2D) sector-shaped format, or a pyramidal three-dimensional (3D) image.
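Mapping beam-space samples into a sector-shaped Cartesian image can be sketched with a nearest-neighbour scan converter. This is a toy illustration under assumed geometry (beams spanning ±theta_max, samples spanning depth 0..r_max); real converters interpolate between beams and samples.

```python
import math

def scan_convert(sector, r_max, theta_max, nx=64, nz=64):
    """Toy nearest-neighbour scan converter. 'sector' is indexed as
    sector[beam][sample] for beams steered -theta_max..+theta_max
    radians. Returns an nz-by-nx Cartesian image; pixels outside the
    sector stay 0."""
    n_beams, n_samples = len(sector), len(sector[0])
    image = [[0.0] * nx for _ in range(nz)]
    for iz in range(nz):
        for ix in range(nx):
            # Pixel position: x spans the sector width, z spans depth.
            x = (ix / (nx - 1) - 0.5) * 2 * r_max * math.sin(theta_max)
            z = iz / (nz - 1) * r_max
            r = math.hypot(x, z)
            theta = math.atan2(x, z)
            if r <= r_max and abs(theta) <= theta_max:
                ib = round((theta / theta_max + 1) / 2 * (n_beams - 1))
                ir = round(r / r_max * (n_samples - 1))
                image[iz][ix] = sector[ib][ir]
    return image
```

Filling a uniform sector shows the expected wedge: pixels below the apex take the sector value, while corners outside the angular span remain zero.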
- The multiplanar reformatter 132 can convert echoes received from points in a common plane in a volumetric region of the body into an ultrasonic image of that plane, as described in U.S. Pat. No. 6,443,896 (Detmer).
- A volume renderer 134 converts the echo signals of a 3D data set into a projected 3D image as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.).
- The 2D or 3D images can be coupled from the scan converter 130, multiplanar reformatter 132, and volume renderer 134 to at least one processor 137 for further image processing operations.
- The at least one processor 137 can include an image processor 136 configured to perform further enhancement and/or buffering and temporary storage of imaging data for display on an image display 138.
- The display 138 can include a display device implemented using a variety of display technologies, such as LCD, LED, OLED, or plasma display technology.
- The at least one processor 137 can include a graphics processor 140, which can generate graphic overlays for display with the ultrasound images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor 140 receives input from the user interface 125, such as a typed patient name.
- The user interface 125 can also be coupled to the multiplanar reformatter 132 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
- The user interface 125 can include one or more mechanical controls, such as buttons, dials, a trackball, a physical keyboard, and others, which may also be referred to herein as hard controls.
- The user interface 125 can include one or more soft controls, such as buttons, menus, a soft keyboard, and other user interface control elements implemented, for example, using touch-sensitive technology (e.g., resistive, capacitive, or optical touch screens).
- One or more of the user controls can be co-located on a control panel 124.
- One or more of the mechanical controls can be provided on a console, and/or one or more soft controls can be co-located on a touch screen, which can be attached to or integral with the console.
- The display 138 and the user interface 125 can be included in an I/O component, via which outputs are provided by the system 100 and/or inputs are received by the system 100.
- The user interface 125 can provide one or more outputs of the disclosed system, including corrected sequences, lists of generated recommendations for improving sweep quality, and/or confidence metrics for generated determinations. Additionally or alternatively, information regarding sweep quality and/or characteristics (e.g., sweep number, reasons for low-quality data, maximum and minimum probe acceleration/velocity, sweep direction, etc.) can be output to the user. In some implementations, the one or more outputs of the system can be provided via one or more graphical user interfaces (e.g., via the display 138).
- The at least one processor 137 can also perform functions associated with evaluating medical imaging data, as described herein. For example, the at least one processor 137 can compare received ultrasound imaging data (e.g., non-expert and/or novice acquisitions) to reference ultrasound imaging data (e.g., expert and/or trained-user acquisitions) and generate one or more recommendations based on this comparison, such as a recommendation to improve the quality of the received ultrasound imaging data and/or a recommendation to repeat acquisition of at least a portion of the received ultrasound imaging data.
- Quality of the imaging data can include various characteristics, such as resolution, blurriness, presence/visibility of an object of interest or landmark, contrast, noise, zoom, number of frames, spacing of frames, probe or transducer movement or orientation, and so forth.
- The at least one processor 137 trains, provides, and/or accesses one or more models disclosed herein, such as a neural network and/or other AI model.
- Processors described herein can be implemented in a single processor (e.g., a CPU or GPU implementing the functionality of processor 137) or a smaller number of processors than described in this example. While an image processor 136 and a graphics processor 140 are described, more or fewer processors can be included in the at least one processor 137, and functions of one or more processors can be combined. In some embodiments, the at least one processor 137 can be hardware-based (e.g., include multiple layers of interconnected nodes implemented in hardware).
- The at least one processor 137 can be implemented at other processing stages, e.g., prior to the processing performed by the image processor 136, volume renderer 134, multiplanar reformatter 132, and/or scan converter 130. In some embodiments, the at least one processor 137 can be implemented to process ultrasound data in the channel domain, the beamspace domain (e.g., before or after beamformer 122), the IQ domain (e.g., before, after, or in conjunction with signal processor 126), and/or the k-space domain.
- Functionality of two or more of the processing components can be combined into a single processing unit and/or divided between multiple processing units.
- The processing units can be implemented in software, hardware, or a combination thereof.
- The at least one processor 137 can include one or more graphics processing units (GPUs).
- The beamformer 122 can include an application-specific integrated circuit (ASIC).
- The at least one processor 137 can be coupled to one or more computer-readable media (e.g., memory 142) included in the system 100, which can be non-transitory.
- The one or more computer-readable media can carry instructions and/or a computer program that, when executed, cause the at least one processor 137 to perform operations described herein.
- A computer program can be stored/distributed on any suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but can also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
- Embodiments can take the form of a computer program product accessible from a computer-readable medium providing program code for use by or in connection with a computer or any device or system that executes instructions.
- A computer-readable medium can generally be any tangible apparatus that can contain, store, communicate, propagate, and/or transport the program for use by or in connection with the instruction execution device.
- The computer-readable medium can be, for example, without limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, and/or a propagation medium.
- Non-limiting examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk.
- Optical disks can include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W), and/or DVD.
- FIG. 2 is a block diagram illustrating a workflow 200 for training a model to evaluate sweep acquisitions, according to principles of the present disclosure.
- The workflow 200 can train an AI model 210 to evaluate received medical imaging data in comparison to reference medical imaging data 205, based on a generated distribution and/or representation of the reference medical imaging data. Based on the evaluation, the trained AI model 210 can be used to generate recommendations, as described herein.
- The workflow 200 can be performed using the system 100 of FIG. 1.
- The workflow 200 begins when reference medical imaging data 205 is received.
- The reference medical imaging data 205 can comprise ultrasound sequences and/or associated data related to sweeps performed by a trained and/or experienced user of an ultrasound system (e.g., an expert user).
- The reference medical imaging data 205 can comprise any data having characteristics of expert-acquired data, including simulated data.
- Retrospective blind sweep data can be received comprising ultrasound imaging data acquired by a large number of expert users (e.g., 100 or more expert users).
- The reference medical imaging data 205 can be annotated, labeled, and/or associated with metadata.
- The received data can include annotations for each of a set of sweeps, such as annotations related to a location and/or orientation of one or more sweeps (e.g., horizontal/vertical, right/left/middle, upper/lower middle, and so forth).
- Annotations can include gestational age, number of fetuses, demographic data, measurement data, and/or electronic health record (EHR) data.
- The reference medical imaging data 205 can include supplemental data, such as data acquired using an inertial measurement unit (IMU) and/or using one or more sensors (e.g., an accelerometer, a magnetometer, a gyroscope, and/or an electromagnetic localization sensor). IMU data can be used, for example, to determine probe position, speed, acceleration, direction, location (e.g., start location, stop location), orientation, and so forth.
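Deriving probe speed from accelerometer samples reduces to numerical integration. The sketch below is a deliberately simplified illustration: a real IMU pipeline would first subtract gravity, fuse the gyroscope and magnetometer readings, and correct drift, none of which is shown here.

```python
def integrate_velocity(accel, dt):
    """Cumulatively integrate probe acceleration samples (m/s^2) taken
    at a fixed interval dt (s) into velocity (m/s), assuming the probe
    starts at rest."""
    v, out = 0.0, []
    for a in accel:
        v += a * dt  # simple rectangle-rule integration
        out.append(v)
    return out
```

The same cumulative sum applied to the velocity trace would yield probe displacement along the sweep, which is how start/stop locations and sweep extent could be estimated from IMU data.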
- The reference medical imaging data 205 can comprise both expert and novice datasets. In these and other implementations, the reference medical imaging data 205 can be labeled and/or classified based on various characteristics, such as the level of skill/experience of the user that acquired the data.
- The reference medical imaging data 205 can additionally or alternatively include original and/or corrected datasets, such as medical imaging data that includes suggested corrections or modifications, annotated fetal gestational parameters or other parameters and/or associated errors, and so forth.
- The reference medical imaging data 205 can be used to generate a training dataset, such as by determining one or more variable values associated with the reference medical imaging data 205 (e.g., contrast, gestational age, probe direction, probe orientation, probe speed, user level of skill/experience, number of fetuses, known medical conditions, fetal measurements, etc.).
- The reference medical imaging data 205 is provided to an AI model 210, which can comprise one or more neural networks.
- The AI model 210 can be a variational autoencoder (VAE) that can be trained based on the reference medical imaging data to determine characteristics of received medical imaging data.
- The AI model 210 comprises an encoder 220, a decoder 225, and one or more modules 230, which can perform operations further described herein.
- The AI model 210 can be trained, for example, to identify characteristics of the reference medical imaging data.
- The AI model 210 can be trained to identify more useful data for retention and/or analysis, and/or to identify less useful data to be discarded, such as data at the beginning and end of a sweep (e.g., blank frames, repeated frames, etc.). Other characteristics that can be identified include air pockets, speed (e.g., uniform speed), appropriate start and stop points, direction (e.g., correct or incorrect direction), etc. Additionally or alternatively, the AI model 210 can be trained to derive information from the ultrasound images via optical flow or similar image-based motion tracking methods, or using information provided as an output from another signal processing controller utilizing additional data streams (e.g., inertial measurement unit data).
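A very crude image-based motion cue, useful for spotting blank or repeated frames at the start and end of a sweep, is the mean absolute intensity change between consecutive frames. This stands in for the optical-flow methods mentioned above, which track motion far more precisely; the function below is an assumed helper, not part of the disclosure.

```python
import numpy as np

def mean_frame_motion(frames):
    """Mean absolute intensity change between each pair of consecutive
    frames. Near-zero values flag repeated or static frames that could
    be discarded; large jumps suggest fast probe motion."""
    return [float(np.mean(np.abs(b - a))) for a, b in zip(frames, frames[1:])]
```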
- The AI model 210 can comprise any construct that is trained using training data to make predictions or provide probabilities for new data items, whether or not the new data items were included in the training data.
- Training data for supervised learning can include items with various parameters and an assigned classification.
- A new data item can have parameters that a model can use to assign a classification to the new data item.
- A model can be a probability distribution resulting from the analysis of training data, such as the likelihood of an n-gram occurring in a given language based on an analysis of a large corpus from that language.
- Models and/or associated techniques include, without limitation: neural networks, support vector machines, decision trees, decision tree forests, Parzen windows, Bayes classifiers, clustering, reinforcement learning, probability distributions, and others. Models can be configured for various situations, data types, sources, and output formats.
- Models trained using the workflow 200 can include a neural network with multiple input nodes that receive training datasets.
- The input nodes can correspond to functions that receive the input and produce results. These results can be provided to one or more levels of intermediate nodes that each produce further results based on a combination of lower-level node results.
- A weighting factor can be applied to the output of each node before the result is passed to the next-layer node.
- One or more nodes of the output layer can produce a value classifying the input that, once the model is trained, can be used to assess one or more characteristics of received medical imaging data, such as coordinates for the medical imaging data in a latent space.
- Such neural networks can have multiple layers of intermediate nodes with different configurations, can be a combination of models that receive different parts of the input and/or input from other parts of the deep neural network, or can be recurrent, partially using output from previous iterations of applying the model as further input to produce results for the current input.
- A model can be trained with supervised learning. Testing data can then be provided to the model to assess accuracy. Testing data can be, for example, a portion of the entire dataset (e.g., 10%) held back to use for evaluation of the model. Output from the model can be compared to the desired and/or expected output for the training data and, based on the comparison, the model can be modified, such as by changing weights between nodes of the neural network and/or parameters of the functions used at each node (e.g., by applying a loss function). Based on the results of the model evaluation, and after applying the described modifications, the model can then be retrained to evaluate new data.
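The hold-back step described here (e.g., reserving 10% of the dataset for evaluation) can be sketched as a simple reproducible split; the function name and the seeded-shuffle approach are assumptions for illustration.

```python
import random

def train_test_split(data, test_frac=0.1, seed=0):
    """Hold back a fraction of the dataset for evaluating the trained
    model. Shuffling with a fixed seed keeps the split reproducible
    across training runs."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_frac))
    return shuffled[n_test:], shuffled[:n_test]
```

The model is fitted only on the first returned list; accuracy measured on the second list then estimates performance on unseen acquisitions.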
- The AI model 210 can be trained to generate one or more distributions of medical imaging data, which can comprise Gaussian distributions and/or representations (e.g., three-dimensional (3D) distributions) of the medical imaging data in a latent space (z) and/or other mappings (e.g., distributions of the data in one or more dimensional spaces).
- The AI model 210 can be trained to generate distributions and/or representations of medical imaging data based on various characteristics of the medical imaging data, such as one or more values of the medical imaging data as compared to a mean (μ), median, mode, standard deviation (σ), and/or other value associated with a set of medical imaging data (e.g., a set of expert medical imaging data).
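Comparing one value against the mean (μ) and standard deviation (σ) of an expert set amounts to computing a standard score. The sketch below is illustrative only; which characteristic is scored and what threshold flags an outlier are not specified by the source.

```python
import statistics

def z_score(value, expert_values):
    """Standard score of one characteristic of a new acquisition
    (e.g., probe speed) against the expert set's mean and sample
    standard deviation."""
    mu = statistics.mean(expert_values)
    sigma = statistics.stdev(expert_values)
    return (value - mu) / sigma
```

A score near zero means the acquisition's characteristic is typical of the expert distribution; a large magnitude (e.g., beyond two or three standard deviations) would mark it as atypical.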
- the Al model 210 can determine and/or identify (e.g., using modules 230) one or more values associated with the medical imaging data, such as speed, direction, and/or orientation of the probe, image contrast and/or other image characteristics or parameters, and so forth.
- the Al model 210 can be trained to represent datasets (e.g., sets of images and/or associated data for individual acquisitions) as a point within a distribution (e.g., 300 of FIG. 3).
- the Al model 210 can additionally or alternatively generate reconstructed images 215 based on the representation of the datasets, for example, to compare the representation to the overall distribution of the reference medical imaging data.
- the reconstructed images 215 can be provided by and/or determined using one or more outputs of the decoder 225.
- the AI model 210 generates reconstructed images 215 and determines coordinates in a latent space/mapping/distribution for a corresponding dataset based on characteristics of the reconstructed images 215.
- the encoder 220 and decoder 225 include convolutional neural networks with 3D inputs (stacks of 2D sequences) or recurrent neural networks with unidirectional or bidirectional long short-term memory (LSTM) architecture, etc.
- Such a network can be trained to learn the distribution (μ, σ) over the latent space (z) representing expert acquired frames (e.g., reference medical imaging data 205).
- the AI model 210 can be trained to generate a set of coordinates (e.g., using modules 230) within the latent space (z) to represent a dataset (e.g., a sweep acquisition or set of sweep acquisitions).
- the AI model 210 can be trained to output a confidence in its representation of an input image. Because an autoencoder network reconstructs the input image (e.g., by generating reconstructed images 215 based on reference medical imaging data 205), the representation of the input image (z) can be assigned a quality or uncertainty value based on the quality of the reconstruction (e.g., by using a loss computed between the input and output images). Additionally or alternatively, a second loss function of a VAE (e.g., Kullback-Leibler (KL) divergence) can be used to determine uncertainty.
- the mean KL divergence between the representation of an input sequence and the current trained encodings can indicate whether the input sequence is represented well by the learned encodings (small divergence) or is an outlier (large divergence). These uncertainty or divergence values can be used to estimate a confidence for each input. Additionally, confidence can be computed by observing the attention maps produced by the network. For example, if the attention of the network is on shadows, blank frames, and other characteristics that represent a novice user acquisition, the network should have higher confidence in its output; whereas if the attention maps highlight the organs or other relevant structures, the network should have lower confidence.
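The KL-divergence uncertainty check described above can be sketched with the closed-form divergence between a diagonal Gaussian posterior N(μ, σ²) and the standard normal prior N(0, 1). The threshold value is an illustrative assumption; in practice it would be calibrated against the trained encodings.

```python
# Closed-form KL divergence between the encoder's diagonal Gaussian
# posterior and the standard normal prior, summed over latent dimensions.
# Small divergence: the input is well represented; large: outlier.
import math

def kl_to_standard_normal(mu, sigma):
    """Sum of KL(N(mu_i, sigma_i^2) || N(0, 1)) over latent dimensions."""
    return sum(
        0.5 * (s * s + m * m - 1.0 - math.log(s * s))
        for m, s in zip(mu, sigma)
    )

def is_outlier(mu, sigma, threshold=5.0):
    """Flag inputs whose latent representation diverges from the prior
    by more than a (hypothetical) calibrated threshold."""
    return kl_to_standard_normal(mu, sigma) > threshold

# A representation that matches the prior exactly has zero divergence:
kl_to_standard_normal([0.0, 0.0], [1.0, 1.0])  # 0.0
```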
- the workflow 200 can be repeated any number of times.
- the workflow 200 can be repeated to retrain the AI model 210 and/or to evaluate the AI model 210 after training.
- a portion (e.g., 10%) of the reference medical imaging data 205 can be held back as testing data, and a testing dataset can be generated using the testing data.
- the AI model 210, when trained using the workflow 200, can be evaluated using the testing dataset to determine an accuracy of the trained AI model 210, and the AI model 210 can be retrained when the accuracy does not exceed a threshold accuracy (e.g., 70%, 80%, 90%, etc.).
- Retraining the AI model 210 can comprise repeating the workflow 200 using the same reference medical imaging data 205 and/or using an expanded dataset, and/or adjusting one or more weights associated with the AI model 210.
- FIG. 3 is a display diagram illustrating a mapping 300 of sweep acquisitions in a latent space, according to principles of the present disclosure.
- the mapping 300 can be a latent distribution and/or a representation of the sweep acquisitions in the latent space.
- the mapping 300 can be generated, for example, according to the workflow 200, such as using the trained AI model 210, and/or using the system 100 of FIG. 1. For example, points within the mapping 300 can be positioned based on reconstructed images 215 generated using the AI model 210.
- the mapping 300 comprises a set of points positioned within the latent space, each point positioned relative to the remaining points based on a distribution generated according to the workflow 200, wherein each point can represent a discrete sweep acquisition within a dataset (e.g., one or more reconstructed images 215 as in FIG. 2). Although a 2D latent space is illustrated, any dimensional space can be used, such as a 3D/multi-dimensional latent space and/or multiple latent spaces.
- the mapping 300 can be used to represent and/or determine a distance between a specific sweep acquisition (represented as a point) and one or more expert sweeps, such as a closest expert sweep within the mapping 300 and/or a mean expert sweep.
- the distance can be used to determine the difference between the specific sweep acquisition and the one or more expert sweeps, which can indicate one or more relative characteristics of the specific sweep acquisition, such as whether the specific sweep acquisition has too much or too little contrast, whether a direction of the sweep is correct or incorrect, whether the sweep acquisition is of sufficient quality (e.g., as compared to a threshold quality), whether the sweep acquisition includes unnecessary frames, and so forth.
- the center 305 of the mapping 300 represents sweep acquisitions having characteristics of an expert sweep acquisition (e.g., based on a mean of one or more characteristics of the sweep acquisition relative to a distribution of expert sweep acquisitions). Dimensions within the latent space of the mapping 300 can be associated with various characteristics of an acquisition, such as an amount of contrast, a quality or confidence metric, a direction/orientation/speed of a probe, and so forth. Accordingly, a distance between the center 305 and a point within the latent space corresponds to a difference between the point (e.g., representing a discrete sweep acquisition) and a mean expert acquisition.
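The distance computation described above can be sketched as follows. The coordinates are illustrative 2D points, and the helper names are assumptions for the sketch rather than elements of the disclosure.

```python
# Euclidean distance in the latent space between a sweep's coordinates
# and (a) the mean expert point (the center 305) and (b) the closest
# individual expert sweep.
import math

def mean_point(points):
    """Coordinate-wise mean of a set of latent-space points."""
    return tuple(sum(coord) / len(points) for coord in zip(*points))

def closest_expert(sweep, expert_points):
    """Return the expert point nearest to the sweep's latent coordinates."""
    return min(expert_points, key=lambda p: math.dist(sweep, p))

# Illustrative expert acquisitions clustered near the center:
experts = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.2)]
center = mean_point(experts)                    # roughly (0.0, 0.067)

novice_sweep = (2.0, 2.0)                       # far from the center
dist_to_center = math.dist(novice_sweep, center)
nearest = closest_expert(novice_sweep, experts)
```

A large `dist_to_center` corresponds to a large difference between the sweep and a mean expert acquisition, as described for the center 305.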
- the upper left portion 310 of the mapping 300 represents sweep acquisitions having a low overall quality and/or confidence, such that points within the upper left portion 310 should be removed and/or the corresponding sweep acquisition should be repeated.
- the upper right portion 315 of the mapping 300 represents sweep acquisitions having an incorrect sweep direction. That is, a point can be positioned in the upper right portion 315 to indicate a difference in the direction in which an ultrasound probe is moving during a corresponding sweep, as compared to mean expert sweeps represented by the center 305 of the mapping 300.
- the lower right portion 320 of the mapping 300 represents sweep acquisitions having insufficient contrast, as compared to the center 305.
- the lower left portion 325 of the mapping 300 represents sweep acquisitions having unnecessary frames (e.g., a first 10 frames of an acquisition), as compared to the center 305.
- mapping 300 can represent any number of characteristics of sweep acquisitions, and the provided examples are merely illustrative.
- the mapping 300 can be used to indicate probe orientation, probe speed, probe direction, image quality, confidence metrics, superfluous or unnecessary data (e.g., image frames), insufficient contact with a probe, and so forth.
- FIG. 4 is a block diagram illustrating a workflow 400 for applying a trained model (e.g., AI model 210 of FIG. 2) to evaluate sweep acquisitions, according to principles of the present disclosure.
- the workflow 400 can be performed using the system 100 of FIG. 1, using an AI model 210 trained according to workflow 200 of FIG. 2, and/or using the mapping 300 of FIG. 3.
- the workflow 400 can use the AI model 210 to determine positions of non-expert sweeps within the mapping 300 and/or to compare the non-expert sweeps to one or more closest expert sweeps within the mapping 300.
- the workflow 400 begins when medical imaging data 405 is received.
- the medical imaging data 405 can comprise sweep acquisitions performed by a non-expert user, such as an untrained or novice user of an ultrasound imaging system.
- the medical imaging data 405 can comprise ultrasound images (e.g., image sequences corresponding to sweeps) and/or associated data, such as sensor data (e.g., captured using an accelerometer, a magnetometer, a gyroscope, and/or electromagnetic localization sensor).
- the trained AI model 210 can process the medical imaging data 405 to determine various characteristics of the medical imaging data 405, such as characteristics of individual sweep acquisitions within the medical imaging data 405.
- the AI model 210 can generate reconstructed medical imaging data 410 based on the medical imaging data 405 and/or the AI model can determine coordinates 415 for the medical imaging data 405 and/or the reconstructed medical imaging data 410 within one or more distributions 420 and/or one or more latent spaces generated as described herein.
- the distribution 420 can comprise and/or be based on a mapping 300 as illustrated with reference to FIG. 3, and the coordinates 415 can indicate characteristics of the medical imaging data 405 relative to sweeps represented in the mapping 300.
- the coordinates 415 can comprise or be based on a representation of the medical imaging data 405 and/or the reconstructed medical imaging data 410 in a latent space.
- the distribution 420 can comprise a latent distribution of a set of reference ultrasound imaging data represented in the latent space.
- the AI model 210 generates the reconstructed medical imaging data 410 as a closest approximation of what a sequence of images in the medical imaging data 405 would look like if acquired by an expert user.
- because the AI model 210 is trained based on expert sweeps and/or data having characteristics of expert-acquired imaging data, the AI model 210 cannot adequately reconstruct all frames generated by novice users.
- the closest approximation of the image sequence would be classified as a statistical outlier based on the coordinates 415 within the distribution 420 (e.g., causing a high reconstruction error and/or large KL divergence), and the position of the approximation in a latent space (e.g., as illustrated by mapping 300 and/or distribution 420) can be used to suggest actions to improve the underlying data.
- various characteristics of the medical imaging data 405 can be determined in comparison to expert data (e.g., data of a trained and/or experienced user) present in the distribution 420. For example, a closest expert data point within the distribution 420 can be determined, and a difference between the position of the medical imaging data 405 and/or the reconstructed medical imaging data 410 and the closest expert data point can be determined. Based on the difference between these two points (e.g., a magnitude and direction of the difference), one or more recommendations can be generated.
- a recommendation can be generated to reverse the direction of probe movement when a corresponding sweep is repeated (e.g., when the medical imaging data 405 is positioned in the upper right portion 315 of the mapping 300 of FIG. 3).
- the distribution 420 can comprise various dimensions each representing different characteristics of a sweep acquisition, and the workflow 400 can accordingly be performed to generate various recommendations based on various differences between medical imaging data 405 and/or reconstructed medical imaging data 410 and respective closest expert data.
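The recommendation step can be sketched by mapping the difference vector between a sweep's latent coordinates and the closest expert point onto the quadrant layout of mapping 300 (upper left: low quality, upper right: wrong direction, lower right: insufficient contrast, lower left: unnecessary frames). The thresholds and axis assignments here are hypothetical; a real latent space may have many dimensions with learned, not hand-assigned, meanings.

```python
# Illustrative sketch: generate a recommendation from the magnitude and
# direction of the difference between a sweep's latent coordinates and
# the closest expert point, following the quadrants of mapping 300.
def recommend(sweep, expert, tolerance=0.5):
    dx = sweep[0] - expert[0]
    dy = sweep[1] - expert[1]
    if abs(dx) < tolerance and abs(dy) < tolerance:
        return "sweep acceptable"            # close to an expert sweep
    if dy >= 0:
        # upper left: low quality; upper right: wrong sweep direction
        return "repeat sweep" if dx < 0 else "reverse probe direction"
    # lower right: insufficient contrast; lower left: unnecessary frames
    return "increase contrast" if dx >= 0 else "remove unnecessary frames"

recommend((3.0, 3.0), (0.0, 0.0))    # "reverse probe direction"
recommend((-3.0, -3.0), (0.0, 0.0))  # "remove unnecessary frames"
```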
- the recommendations may be provided to a user via a user interface, such as user interface 125.
- the recommendation may be provided as text on a display, such as display 138.
- the recommendation may be a graphic icon (e.g., a thumbs up sign for a good or expert scan) or color coding (e.g., shade images that should be removed in red).
- the workflow 400 can be performed any number of times, for example, to evaluate different sets of medical imaging data 405, such as individual datasets representing discrete sweep acquisitions and/or portions of sweep acquisitions.
- FIG. 5 is a flow diagram illustrating a process 500 for evaluating ultrasound imaging data, according to principles of the present disclosure.
- the process 500 can be performed using the system 100 of FIG. 1.
- the process 500 can comprise at least a portion of workflows 200 of FIG. 2 and/or 400 of FIG. 4, and/or the process 500 can apply the mapping 300 of FIG. 3 to determine one or more recommendations and/or to otherwise characterize or evaluate medical imaging data.
- ultrasound imaging data is received.
- the ultrasound imaging data can correspond to a set of sweep acquisitions or other imaging data acquired according to a protocol.
- the ultrasound imaging data is acquired using an ultrasound probe.
- the ultrasound imaging data can be received in real time (e.g., as it is acquired during examination of a subject) or subsequent to acquisition.
- the ultrasound imaging data received at block 510 can be data acquired by a non-expert user of an ultrasound imaging system, such as a novice user, an untrained user, or a minimally-trained user.
- the ultrasound imaging data can comprise images (e.g., image frames), signals (e.g., echo signals), and/or supplemental data associated with acquisition of ultrasound images (e.g., settings information or other parameters).
- the ultrasound imaging data received at block 510 can include sensor data captured via one or more of an accelerometer, a magnetometer, a gyroscope, an electromagnetic localization sensor, or a combination thereof.
- the foregoing sensors and/or other sensors can be included in an ultrasound imaging system, within an ultrasound probe, and/or in separate components or modules.
- the sensor data can be captured using an inertial measurement unit (IMU).
- the ultrasound imaging data received at block 510 can include a sequence of frames corresponding to a sweep acquisition and/or one or more characteristics of sweeps, such as a sweep number or other identifier (e.g., to identify a sweep within a predetermined sequence or series of sweeps) and/or an intended location of a sweep (e.g., relative to an anatomy of a subject). Additionally or alternatively, the ultrasound imaging data can further include the approximate gestational age/trimester, EHR data, and/or any other information that can impact the outcome of one or more calculations to be generated using the data.
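The bundle of imaging data described above (frames, sweep identifier, intended location, sensor samples, and optional context such as gestational age) might be organized as a simple record. The field names are illustrative assumptions, not from the disclosure.

```python
# Illustrative data structure for one sweep acquisition and its
# supplemental data, per the description above. All field names are
# hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SweepAcquisition:
    frames: list                   # sequence of image frames for one sweep
    sweep_number: int              # identifier within a predetermined series
    intended_location: str         # intended location relative to the anatomy
    imu_samples: list = field(default_factory=list)  # accelerometer/gyroscope/magnetometer data
    gestational_age_weeks: Optional[float] = None    # optional EHR/gestational context

sweep = SweepAcquisition(frames=[], sweep_number=3,
                         intended_location="lower abdomen, transverse")
```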
- a representation of the received ultrasound imaging data is generated in a latent space.
- a VAE and/or other model (e.g., 210 of FIG. 2) can process the received ultrasound imaging data to generate a representation of the ultrasound imaging data relative to characteristics of expert data.
- the representation can be compared to a latent distribution of the reference ultrasound imaging data in the latent space.
- the comparison can be to identify a closest subset of the set of reference ultrasound imaging data, which can be a specific sweep acquisition of an expert user that is most similar to a sweep acquisition of a non-expert user.
- the reference ultrasound imaging data can comprise expert-acquired sweeps, other expert-acquired data, and/or other data having characteristics of expert-acquired data, which can comprise simulated data.
- the comparison performed at block 530 can be performed using an AI model (e.g., AI model 210), such as a neural network, a machine learning model, and/or a VAE.
- the process 500 can include training the AI model using the reference ultrasound imaging data, such as training a model according to the workflow 200 of FIG. 2.
- the process 500 can include generating and/or accessing a training dataset using the set of reference ultrasound imaging data and training a neural network using the training dataset.
- the comparison performed at block 530 can comprise evaluating the representation of the received ultrasound imaging data using a mapping of the set of reference ultrasound imaging data in a latent space.
- the comparison can comprise accessing or generating the mapping 300 of FIG. 3, which can be a mapping of the reference ultrasound imaging data in a latent space, and determining coordinates for the received ultrasound imaging data within the mapping 300 (e.g., represented as a point within the mapping 300).
- the location of the received ultrasound imaging data can then be compared to one or more additional points, such as a point for a closest subset of the reference ultrasound imaging data.
- the mapping 300 can be used to determine a closest expert acquisition in the reference ultrasound imaging data to a non-expert acquisition in the received ultrasound imaging data, and a difference between these two points within the mapping can be used to make recommendations.
- one or more recommendations are generated based on the comparison performed at block 530.
- the one or more recommendations can be based on a difference between the received ultrasound imaging data and the identified closest subset of the set of reference ultrasound imaging data.
- the difference between the received ultrasound imaging data and the identified closest subset of the set of reference ultrasound imaging data can be based, at least in part, on a reconstruction error, a Kullback-Leibler divergence, or both.
- the reconstruction error and/or Kullback-Leibler divergence can indicate how “far” an input image sequence is from a closest expert sweep.
- a recommendation can comprise a recommendation to improve a quality of the received ultrasound imaging data, a recommendation to repeat at least a portion of the set of sweep acquisitions, or a combination thereof.
- the recommendation can be to adjust one or more settings or parameters (e.g., to improve contrast, resolution, or zoom), to discard one or more frames of ultrasound imaging data, to repeat at least a portion of a sweep acquisition, to change a direction, position, and/or orientation of an ultrasound probe, and so forth.
- a recommendation can be applied automatically (e.g., to automatically remove unnecessary frames from the received ultrasound imaging data).
- whether a recommendation can be applied automatically or whether additional user input is required can be determined on a feature-by-feature basis.
- a recommendation can be provided to a user (e.g., via graphical user interface).
- the process 500 can further include receiving updated ultrasound imaging data based on one or more recommendations generated at block 540 and evaluating the updated ultrasound imaging data (e.g., by repeating at least a portion of the process 500).
- a recommendation can be to reverse the probe direction during a sweep acquisition, and the updated ultrasound imaging data can correspond to a repetition of the sweep acquisition during which the user reverses the probe direction in response to the recommendation.
- the process 500 can include evaluating one or more characteristics of an anatomy based on received and/or updated ultrasound imaging data, such as using AI techniques or other techniques. For example, one or more fetal health characteristics can be determined.
- gestational estimations (e.g., age, fetal viability, multiple gestation, etc.) can be automatically extracted from the received and/or updated ultrasound imaging data using artificial intelligence techniques.
- gestational estimations and/or other operations can be performed after one or more recommendations generated at block 540 are implemented (e.g., automatically and/or by a user).
- the process 500 can be performed in any order, including performing one or more operations in parallel and/or repeating one or more operations. Additionally, operations can be added to or removed from the process 500 without deviating from the teachings of the present disclosure.
- systems and related methods disclosed herein can evaluate medical imaging data associated with sweep acquisitions (e.g., performed by a novice user) in comparison to reference medical imaging data (e.g., sweep acquisitions performed by expert users) and generate various recommendations based on the comparison, such as recommendations for improving the quality of the sweep acquisitions. That is, by evaluating the difference between the sweep acquisitions and closest expert sweeps, the disclosed technology can be used by minimally trained and/or inexperienced users to improve the quality of sweep acquisitions, such that novice users can more accurately and efficiently evaluate subjects.
- Disclosed embodiments can be used, for example, in resource-constrained settings and/or when trained and/or experienced users (e.g., expert users) are unavailable or substantially unavailable.
- novice users can be instructed to more accurately simulate actions of expert users. While examples are described herein related to evaluating sweeps performed by novice users, the disclosed technology can additionally or alternatively be applied to guided sweep quality control and/or sweeps acquired by trained and/or experienced users of ultrasound imaging systems.
- the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
- the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
- processors described herein can be implemented in hardware, software, and/or firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
- the functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general-purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
- ASICs application specific integrated circuits
- While the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to, renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Veterinary Medicine (AREA)
- Medical Informatics (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Techniques for evaluating sweep acquisitions are disclosed. Ultrasound imaging data is received for a set of sweep acquisitions. The received ultrasound imaging data can comprise data acquired by a non-expert user. A representation of the ultrasound imaging data in a latent space is generated. The representation of the ultrasound imaging data is compared to a latent distribution of reference ultrasound imaging data, and a recommendation is generated based on a difference between the representation of the ultrasound imaging data and a closest subset of the reference ultrasound imaging data. The recommendation can be to improve a quality of the received ultrasound imaging data or to repeat at least a portion of the set of sweep acquisitions. The reference ultrasound imaging data can be data acquired by an expert user or data having characteristics of expert-acquired data. The comparison of the received ultrasound imaging data to the reference ultrasound imaging data can be performed using a neural network.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363547564P | 2023-11-07 | 2023-11-07 | |
| US63/547,564 | 2023-11-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025098957A1 (fr) | 2025-05-15 |
Family
ID=93607835
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/081115 Pending WO2025098957A1 (fr) | 2023-11-07 | 2024-11-05 | Systèmes et procédés d'évaluation de balayages échographiques |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025098957A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6443896B1 (en) | 2000-08-17 | 2002-09-03 | Koninklijke Philips Electronics N.V. | Method for creating multiplanar ultrasonic images of a three dimensional object |
| US6530885B1 (en) | 2000-03-17 | 2003-03-11 | Atl Ultrasound, Inc. | Spatially compounded three dimensional ultrasonic images |
| US20210327303A1 (en) * | 2017-01-24 | 2021-10-21 | Tienovix, Llc | System and method for augmented reality guidance for use of equipment systems |
| WO2023274512A1 (fr) * | 2021-06-29 | 2023-01-05 | Brainlab Ag | Procédé d'apprentissage et d'utilisation d'un algorithme d'apprentissage profond pour comparer des images médicales sur la base de représentations à dimensionnalité réduite |
| US20230329674A1 (en) * | 2022-04-19 | 2023-10-19 | Koninklijke Philips N.V. | Ultrasound imaging |
-
2024
- 2024-11-05 WO PCT/EP2024/081115 patent/WO2025098957A1/fr active Pending
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6530885B1 (en) | 2000-03-17 | 2003-03-11 | Atl Ultrasound, Inc. | Spatially compounded three dimensional ultrasonic images |
| US6443896B1 (en) | 2000-08-17 | 2002-09-03 | Koninklijke Philips Electronics N.V. | Method for creating multiplanar ultrasonic images of a three dimensional object |
| US20210327303A1 (en) * | 2017-01-24 | 2021-10-21 | Tienovix, Llc | System and method for augmented reality guidance for use of equipment systems |
| WO2023274512A1 (fr) * | 2021-06-29 | 2023-01-05 | Brainlab Ag | Procédé d'apprentissage et d'utilisation d'un algorithme d'apprentissage profond pour comparer des images médicales sur la base de représentations à dimensionnalité réduite |
| US20230329674A1 (en) * | 2022-04-19 | 2023-10-19 | Koninklijke Philips N.V. | Ultrasound imaging |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12285295B2 (en) | Ultrasound system with a neural network for producing images from undersampled ultrasound data | |
| JP7330207B2 (ja) | 適応的超音波スキャニング | |
| US11488298B2 (en) | System and methods for ultrasound image quality determination | |
| EP3596699B1 (fr) | Mesures anatomiques à partir de données ultrasonores | |
| CN112638273B (zh) | 生物测定测量和质量评估 | |
| US12350104B2 (en) | Systems and methods for controlling volume rate | |
| EP4041086B1 (fr) | Systèmes et procédés d'optimisation d'images | |
| JP2012506283A (ja) | 3次元超音波画像化 | |
| EP3897394B1 (fr) | Systèmes et procédés d'indexage de trame et de revue d'image | |
| US11903760B2 (en) | Systems and methods for scan plane prediction in ultrasound images | |
| US12193882B2 (en) | System and methods for adaptive guidance for medical imaging | |
| CN114098795B (zh) | 用于生成超声探头引导指令的系统和方法 | |
| US12422548B2 (en) | Systems and methods for generating color doppler images from short and undersampled ensembles | |
| CN104887271B (zh) | 输出包括在感兴趣区域中的血流信息的方法、设备和系统 | |
| WO2025098957A1 (fr) | Systèmes et procédés d'évaluation de balayages échographiques | |
| CN110801245B (zh) | 超声波图像处理装置以及存储介质 | |
| WO2025087746A1 (fr) | Systèmes et procédés de dépistage par imagerie | |
| WO2024013114A1 (fr) | Systèmes et procédés de criblage d'imagerie | |
| US20240404048A1 (en) | Training medical image annotation models | |
| WO2025140888A1 (fr) | Configuration de systèmes et de procédés de données d'imagerie ultrasonore basés sur un protocole | |
| WO2025124940A1 (fr) | Systèmes et procédés de dépistage par imagerie |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24809561; Country of ref document: EP; Kind code of ref document: A1 |