
WO2025155688A1 - Cyclic measurement-consistency learning for physics-based deep learning reconstruction and uncertainty guidance - Google Patents

Cyclic measurement-consistency learning for physics-based deep learning reconstruction and uncertainty guidance

Info

Publication number
WO2025155688A1
WO2025155688A1 · PCT/US2025/011821
Authority
WO
WIPO (PCT)
Prior art keywords
data
space data
synthesized
undersampled
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/011821
Other languages
English (en)
Inventor
Mehmet Akçakaya
Chi Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Minnesota Twin Cities
University of Minnesota System
Original Assignee
University of Minnesota Twin Cities
University of Minnesota System
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Minnesota Twin Cities, University of Minnesota System filed Critical University of Minnesota Twin Cities
Publication of WO2025155688A1

Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 — ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 — ICT specially adapted for processing medical images, e.g. editing
    • G01 — MEASURING; TESTING
    • G01R — MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 33/00 — Arrangements or instruments for measuring magnetic variables
    • G01R 33/20 — Arrangements or instruments involving magnetic resonance
    • G01R 33/44 — Arrangements using nuclear magnetic resonance [NMR]
    • G01R 33/48 — NMR imaging systems
    • G01R 33/54 — Signal processing systems, e.g. using pulse sequences; generation or control of pulse sequences; operator console
    • G01R 33/56 — Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R 33/5608 — Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition, segmentation, edge contour detection, noise filtering or apodization, deblurring, windowing, zero filling, or generation of gray-scaled, colour-coded or vector-displaying images
    • G01R 33/561 — Image enhancement or correction by reduction of the scanning time, i.e. fast acquiring systems, e.g. using echo-planar pulse sequences
    • G01R 33/5611 — Parallel magnetic resonance imaging, e.g. sensitivity encoding [SENSE], simultaneous acquisition of spatial harmonics [SMASH], unaliasing by Fourier encoding of the overlaps using the temporal dimension [UNFOLD], k-t-broad-use linear acquisition speed-up technique [k-t-BLAST], k-t-SENSE
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/0455 — Auto-encoder networks; encoder-decoder networks
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06N 3/088 — Non-supervised learning, e.g. competitive learning
    • G16H 50/00 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/50 — ICT for simulation or modelling of medical disorders
    • G16H 50/70 — ICT for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present disclosure provides a method for training a deep learning model to solve an inverse problem.
  • a method for training a physics-driven deep learning model to reconstruct an image from k-space data acquired with a magnetic resonance imaging (MRI) system is provided.
  • the method includes accessing undersampled k-space data with a computer system, where the undersampled k-space data have been acquired from a subject using an MRI system; accessing a physics-driven deep learning (PD-DL) model with the computer system; inputting the undersampled k-space data to the PD-DL model, generating synthesized k-space data corresponding to k-space locations not sampled in the undersampled k-space data; generating uncertainty measurement data by computing a cyclic measurement consistency between the synthesized k-space data and the undersampled k-space data; and updating the PD-DL model by incorporating an additional regularizer based on the uncertainty measurement data.
  • FIG. 2 illustrates an example of uncertainty estimation using measurement consistency.
  • FIG. 3 illustrates self-supervision via data undersampling (SSDU); the Ω̃_n are simulated k-space undersampling patterns with the same distribution as Ω.
  • FIG. 4 is a flowchart setting forth the steps of an example method for uncertainty-guided deep learning model reconstruction using a cyclic measurement consistency.
  • FIG. 5 shows representative examples of the proposed uncertainty estimation based on measurement consistency. The uncertainty maps visibly capture the artifacts in the error image corresponding to the network output.
  • the proposed approach reduces aliasing artifacts associated with the standard PD-DL methods.
  • the corresponding error maps are scaled by a factor of 5 for display purposes.
  • FIG. 7 is a block diagram of an example system for uncertainty-guided deep learning reconstruction in accordance with some aspects of the present disclosure.
  • FIG. 8 is a block diagram of example components that can implement the system of FIG. 7.
  • the disclosed systems and methods implement an uncertainty estimation process that focuses on the data fidelity component of a physics-driven deep learning (PD-DL) model by characterizing the cyclic consistency between different forward models. Subsequently, this uncertainty estimate is used to guide the training of the PD-DL model. It is an advantage of the disclosed systems and methods that this uncertainty-guided PD-DL strategy improves reconstruction quality.
  • cyclic consistency is used to perform uncertainty estimation through an unrolled PD-DL network.
  • these disclosed systems and methods also provide an improved approach for the self-supervised (e.g., reference-less) training of PD-DL reconstruction in the scarce data regime, such as when using high sub-sampling and/or high acceleration rates.
  • the cyclic consistency-based techniques described in the present disclosure simulate new measurements based on inference results with a different known (e.g., forward) model, whose inference results should be consistent with the original data (e.g., the acquired data).
  • cyclic consistency can be used, along with the aim that PD-DL reconstruction should be generalizable to undersampling patterns with similar distributions as the sampling pattern used for data acquisition, to improve multi-mask self-supervised learning, in a method that can be referred to as CC-SSDU.
  • the disclosed systems and methods are applicable to inverse problems. For illustrative purposes, an example is provided below with respect to MRI reconstruction.
  • a deep learning model is run on acquired data to synthesize unacquired data with similar characteristics as the acquired data (e.g., a shifted k-space trajectory).
  • this can be achieved by using a multi-mask approach, where pairs of disjoint sets {Θ_m, Λ_m} are generated for m ∈ {1, ..., M} such that Θ_m ∪ Λ_m = Ω, which is the index set of acquired k-space points. Then training is performed using:

    argmin_θ (1/M) Σ_{m=1}^{M} L( y_{Λ_m}, E_{Λ_m} f(y_{Θ_m}, E_{Θ_m}; θ) )

    [0020] where f(y_{Θ_m}, E_{Θ_m}; θ) denotes the network (parametrized by θ) output for input k-space y_{Θ_m} and corresponding encoding operator E_{Θ_m}.
  • the network only sees the points in Θ_m and learns to predict the points in Λ_m, which are disjoint. By cycling through M of these, we cover the full k-space.
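The multi-mask splitting described above can be sketched as follows. The function name, the split ratio `rho`, and the uniform-random choice of mask points are illustrative assumptions, not taken from the disclosure:

```python
import random

def multi_mask_split(omega, n_masks, rho=0.4, seed=0):
    # Generate M pairs of disjoint sets (Theta_m, Lambda_m) with
    # Theta_m ∪ Lambda_m = Omega, the acquired k-space index set:
    # the network sees Theta_m and is penalized on Lambda_m.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_masks):
        lam = set(rng.sample(sorted(omega), int(rho * len(omega))))
        pairs.append((set(omega) - lam, lam))  # disjoint by construction
    return pairs

omega = set(range(0, 256, 4))  # acquired equispaced lines at R = 4
pairs = multi_mask_split(omega, n_masks=3)
```

Cycling over several such pairs exposes the network to different loss masks, which is what covers the full acquired k-space over training.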
  • the further masking of the sub-sampled data can lead to data scarcity.
  • a self-supervised PD-DL network may degrade faster than a supervised PD-DL network at high acceleration rates.
  • the systems and methods described in the present disclosure overcome this problem.
  • the disclosed systems and methods can implement the following framework.
  • Ω = {1, R+1, 2R+1, ...} is an equispaced undersampling pattern with acceleration rate R.
  • the starting pattern Ω could also be a random sampling pattern, a non-Cartesian sampling pattern, and so on.
  • a network can then be trained to interpolate the missing lines at a different shift, say Ω_2 = {2, R+2, 2R+2, ...}, from Ω. Then, this same network can be used to interpolate Ω_3 = {3, R+3, 2R+3, ...} from Ω_2.
  • This process can be repeated R−1 times to interpolate the lines at {R+1, 2R+1, ...}, which is a subset of Ω.
  • the iteratively interpolated lines can be compared with the acquired lines to guide the training process, without requiring any calibration data.
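The shifted-pattern bookkeeping above can be sketched as follows (0-based indices here, versus the 1-based sets in the text; names are illustrative):

```python
def shifted_equispaced(shift, R, n_lines):
    # Equispaced k-space lines {shift, shift+R, shift+2R, ...}.
    return [i for i in range(n_lines) if i % R == shift]

# Cyclic walk at R = 3 over 12 lines: interpolate the shift-1 lines from
# the acquired shift-0 pattern Omega, shift-2 from shift-1, and so on;
# the chain wraps back onto the acquired pattern, so the iteratively
# interpolated lines can be compared with acquired ones, with no
# calibration data needed.
R, n_lines = 3, 12
chain = [shifted_equispaced(s % R, R, n_lines) for s in range(R + 1)]
```

The last pattern in the chain coincides with the acquired one, which is exactly the comparison that supplies the training signal.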
  • This framework can be formulated in the context of physics-driven neural networks, or other suitable deep learning models, by posing it as using a reconstruction from Ω to estimate a new set of lines on a trajectory Ω̃ and then using these new lines to re-estimate Ω, to ensure cyclic consistency between trajectories. This leads to the following loss function:

    argmin_θ L( y_Ω, E_Ω f(ỹ_{Ω̃}, E_{Ω̃}; θ) ).
  • the mean and variance of the outputs can be computed to characterize the uncertainty. Because only one true undersampling pattern Ω may be available at test time, a cyclic measurement consistency can be used to simulate additional patterns.
  • let the output of the unrolled network parametrized by θ be denoted as:

    x̂ = f(y_Ω, E_Ω; θ),   (4)
  • [0032] new measurements can then be simulated for undersampling patterns Ω̃_n drawn from a similar distribution as Ω, as follows:

    ỹ_n = E_{Ω̃_n} x̂,   (5)

    [0033] where E_{Ω̃_n} is the encoding operator with undersampling pattern Ω̃_n and coil sensitivity profiles matching E_Ω.
  • a noise term can also be added to Eqn. (5).
  • a similar distribution of the undersampling patterns here assumes a matched acceleration rate, the same number of central lines, and the same underlying distribution (e.g., equispaced, or variable-density random with the same underlying distribution).
  • a simple example of equispaced sampling at R = 3 is depicted in FIG. 1. Once these rate-R accelerated measurements are simulated, the PD-DL reconstruction can be performed again with the corresponding inputs to the unrolled network as:

    x̃_n = f(ỹ_n, E_{Ω̃_n}; θ).   (6)

    [0034] These reconstructions can then be mapped back to the original measurement locations:

    ŷ_n = E_Ω x̃_n.   (7)

    [0035] This two-step process may incur more error than the first step alone, which may be imperfect to begin with.
  • Lipschitz bounds on the proximal operator neural network and a minimum eigenvalue of the forward operator can be used to show that the overall ℓ2 error is within a constant of the error from the first step.
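The mean/variance computation over the N simulated-pattern reconstructions can be sketched per pixel as below; this is a plain sample mean and standard deviation, not a reproduction of the patent's exact estimator in Eqn. (8):

```python
import math

def uncertainty_map(recons):
    # Pixelwise mean and standard deviation across the reconstructions
    # obtained from the N simulated undersampling patterns; the std map
    # serves as the uncertainty estimate.
    n = len(recons)
    cols = list(zip(*recons))  # gather each pixel across reconstructions
    mean = [sum(px) / n for px in cols]
    std = [math.sqrt(sum((v - m) ** 2 for v in px) / n)
           for px, m in zip(cols, mean)]
    return mean, std

# three toy 2-pixel "reconstructions": pixel 0 varies, pixel 1 does not
mean, std = uncertainty_map([[1.0, 2.0], [3.0, 2.0], [5.0, 2.0]])
```

Pixels where the simulated-pattern reconstructions disagree get large standard deviation, matching the observation that the uncertainty maps capture reconstruction artifacts.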
  • the unrolled network has an estimation error as follows.
  • this additional regularizer may be added via a score function of a Gaussian distribution.
  • the role of the Ω̃_n is to keep the characteristics of the original sampling trajectory Ω but just transform it. For instance, in the previous example, when Ω is equispaced sampling, the Ω̃_n are just shifted versions of equispaced trajectories (and N = R − 1). In another example, Ω could be a spiral trajectory, and the Ω̃_n may be different rotations of that trajectory.
  • Another approach builds on the structure of the PD-DL network itself. First, recall that the objective of the PD-DL network is to solve a regularized least squares problem:

    argmin_x ‖y_Ω − E_Ω x‖²₂ + R(x).
  • Variable splitting with quadratic penalty can be used as an example, leading to the alternating updates

    z^(i) = argmin_z μ‖x^(i−1) − z‖²₂ + R(z),
    x^(i) = argmin_x ‖y_Ω − E_Ω x‖²₂ + μ‖x − z^(i)‖²₂,

    where the first sub-problem is the proximal operator (implemented by a neural network) and the second is the data fidelity step.
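A toy instance of this variable-splitting unroll, with the MRI encoding operator replaced by a simple sampling mask and the learned proximal operator left as an arbitrary callable (everything here is an illustrative stand-in, not the disclosed network):

```python
def unrolled_recon(y, mask, prox, mu=1.0, n_unrolls=5):
    # y: zero-filled measurements; mask[i] is True where sampled.
    # Each unroll: z = prox(x) (the regularizer sub-problem, a CNN in
    # PD-DL), then the data-fidelity sub-problem, which for this
    # diagonal forward model has the closed form
    #   x_i = (y_i + mu * z_i) / (1 + mu) on sampled locations,
    #   x_i = z_i elsewhere.
    x = [y[i] if mask[i] else 0.0 for i in range(len(mask))]
    for _ in range(n_unrolls):
        z = prox(x)
        x = [(y[i] + mu * z[i]) / (1 + mu) if mask[i] else z[i]
             for i in range(len(mask))]
    return x

x = unrolled_recon([1.0, 0.0, 3.0, 0.0], [True, False, True, False],
                   prox=lambda v: v)  # identity prox for illustration
```

With a real (e.g., denoising) prox, the unsampled locations are filled in by the regularizer while sampled locations stay anchored to the data, which is the data-fidelity behavior the uncertainty estimation targets.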
  • both the first network and this new network can be learned together end-to-end (e.g., sharing parameters for the regularization part).
  • Eqn. (6) suggests that these reconstructions can reliably map to y_Ω using the corresponding forward mapping E_Ω x̃_n.
  • the following loss function can be used to incorporate cyclic consistency into multi-mask self-supervised PD-DL:

    min_θ (1/M) Σ_{m=1}^{M} L( y_{Λ_m}, E_{Λ_m} f(y_{Θ_m}, E_{Θ_m}; θ) ) + β (1/N) Σ_{n=1}^{N} L( y_Ω, E_Ω x̃_n ),

    [0057] where β is a weight term and L(·,·) denotes a loss function, such as mean squared error (MSE), a mixed ℓ1–ℓ2 loss, or the like.
  • the first term corresponds to a multi-masking strategy as in MM-SSDU, with multiple pairs of disjoint subsets Θ_m, Λ_m of Ω.
  • the second term incorporates cyclic consistency with respect to the acquired data y_Ω by applying E_Ω over x̃_n (e.g., as shown in FIG. 3).
  • This consistency is enforced for each simulated pattern n ∈ {1, ..., N}.
  • MM-SSDU augments SSDU by training with the following loss:

    min_θ (1/M) Σ_{m=1}^{M} L( y_{Λ_m}, E_{Λ_m} f(y_{Θ_m}, E_{Θ_m}; θ) ),   (21)

    [0058] [0059] Referring now to FIG. 4, a flowchart is provided of an example method for physics-driven deep learning model reconstruction using cyclic measurement consistency.
  • the method includes accessing subsampled data with a computer system, as indicated at step 302. Accessing the subsampled data may include retrieving such data from a memory or other suitable data storage device or medium. Additionally or alternatively, accessing the subsampled data may include acquiring such data with a suitable imaging or measurement system and transferring or otherwise communicating the data to the computer system, which may be a part of the imaging or measurement system. [0061] In some non-limiting examples, the subsampled data may be undersampled medical imaging data, such as undersampled k-space data acquired with an MRI system. [0062] A deep learning model, such as a PD-DL model or other neural network model, is then accessed with the computer system, as indicated at step 304.
  • retrieving the deep learning model can also include retrieving, constructing, or otherwise accessing the particular model architecture to be implemented. For instance, data pertaining to the layers in a neural network architecture (e.g., number of layers, type of layers, ordering of layers, connections between layers, hyperparameters for layers) may be retrieved, selected, constructed, or otherwise accessed. [0064] The subsampled data are then input to the deep learning model, generating output as synthesized data, as indicated at step 306. As described above, the synthesized data correspond to sample points not sampled in the original subsampled data.
  • noise can be added to the synthesized data in order to keep signal-to-noise ratio (SNR) constant or otherwise consistent between the subsampled data and the synthesized data.
  • random noise can be added to the synthesized data, such as random Gaussian noise.
  • a noise profile can be estimated from the subsampled data and the estimated noise profile can be added to the synthesized data.
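One way to realize the matched-noise step is sketched below; using the population standard deviation of a signal-free region as the noise-level estimator is an illustrative assumption, since the disclosure does not fix a specific estimator:

```python
import random
import statistics

def add_matched_noise(synth, noise_region, seed=0):
    # Estimate the noise level from a signal-free region of the acquired
    # data, then add zero-mean Gaussian noise of that level to the
    # synthesized samples so the SNR stays consistent between the
    # subsampled data and the synthesized data.
    sigma = statistics.pstdev(noise_region)
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, sigma) for s in synth]

noisy = add_matched_noise([10.0, 12.0, 9.0], [1.0, -1.0, 1.0, -1.0])
```

For complex k-space, the same noise level would be applied independently to real and imaginary parts.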
  • a measurement consistency between the original subsampled data and the synthesized data can then be computed, as indicated at step 308. For instance, the synthesized data can be used to re-estimate the subsampled data, as described above.
  • An estimate of uncertainty between the re-estimated subsampled data and the original subsampled data can be computed to ensure cyclic consistency between the sets of data points. This process can be repeated to estimate additional sets of synthesized data, checking for cyclic consistency along the way, as indicated at decision block 310.
  • the uncertainty measurement data generated in this process is stored at step 312 and used to guide training of the deep learning model at step 314. For instance, the uncertainty measurement data can be used to formulate an additional regularization on the inverse problem being solved by the deep learning model. Training the deep learning model may include initializing the model, such as by computing, estimating, or otherwise selecting initial model parameters (e.g., weights, biases, or both).
  • the deep learning model receives the inputs for a training example and generates an output using the bias for each node and the connections between nodes and their corresponding weights.
  • training data can be input to the initialized deep learning model, generating an output.
  • the output can be passed to a loss function, such as one of the loss functions described in the present disclosure, to compute an error.
  • the current deep learning model can then be updated based on the calculated error (e.g., using backpropagation methods based on the calculated error).
  • the current deep learning model can be updated by updating the model parameters (e.g., weights, biases, or both) in order to minimize the loss according to the loss function.
  • the training continues until a training condition is met.
  • the training condition may correspond to, for example, a predetermined number of training examples being used, a minimum accuracy threshold being reached during training and validation, a predetermined number of validation iterations being completed, and the like.
  • the training condition has been met (e.g., by determining whether an error threshold or other stopping criterion has been satisfied)
  • the current deep learning model and its associated model parameters represent the trained deep learning model.
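The training procedure of steps 312-314 reduces, in skeleton form, to a standard loop; the toy quadratic objective below stands in for the uncertainty-regularized loss, and all names are illustrative:

```python
def train(loss_and_grad, theta, lr=0.1, tol=1e-6, max_steps=1000):
    # Iterate: forward pass -> loss -> parameter update (gradient step,
    # as backpropagation would supply), stopping when the training
    # condition is met: an error threshold or an iteration cap.
    for _ in range(max_steps):
        loss, grad = loss_and_grad(theta)
        if loss < tol:  # stopping criterion satisfied
            break
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta, loss

# toy objective: minimize (theta - 2)^2 starting from theta = 0
theta, loss = train(lambda th: (sum((t - 2.0) ** 2 for t in th),
                                [2.0 * (t - 2.0) for t in th]), [0.0])
```

On termination, the current parameters represent the trained model, mirroring the description above.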
  • the baseline PD-DL networks for solving Eqn. (1) were trained using both supervised and self-supervised learning. A total of 320 slices from 16 subjects were used for training. The self-supervised PD-DL network was used for subsequent uncertainty estimation and the training of the proposed uncertainty-guided PD-DL network for solving Eqn. (9).
  • the uncertainty-guided PD-DL network was trained using only undersampled data, without requiring reference fully-sampled datasets.
  • the proximal operator in Eqn. (3) was implemented using a convolutional neural network (CNN) with 592,128 learnable parameters.
  • the data fidelity units were implemented using an unrolled conjugate gradient approach with 10 conjugate gradient steps. All implementations used PyTorch 1.10, and experiments were performed on a server with an NVIDIA A100 GPU.
  • a normalized ℓ1–ℓ2 loss was minimized using the Adam optimizer with a learning rate of 0.0003 for 200 epochs.
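The normalized ℓ1–ℓ2 loss named above is commonly implemented as the sum of an ℓ2 term and an ℓ1 term, each normalized by the reference; the exact normalization used in the experiments is an assumption here:

```python
def normalized_l1_l2(pred, ref, eps=1e-12):
    # ||pred - ref||_2 / ||ref||_2  +  ||pred - ref||_1 / ||ref||_1,
    # with eps guarding against division by zero.
    diff = [p - r for p, r in zip(pred, ref)]
    l2 = (sum(d * d for d in diff) ** 0.5
          / max(sum(r * r for r in ref) ** 0.5, eps))
    l1 = sum(abs(d) for d in diff) / max(sum(abs(r) for r in ref), eps)
    return l2 + l1

loss = normalized_l1_l2([2.0, 0.0], [1.0, 0.0])
```

Normalizing by the reference makes the loss scale-invariant across slices with different overall signal intensity.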
  • FIG. 5 depicts example uncertainty maps generated using the approach described above.
  • the standard deviation map obtained using Eqn. (8) visibly matches the error in the reconstruction in two representative slices.
  • a computing device 650 can receive one or more types of data (e.g., undersampled k-space data, other subsampled data) from data source 602.
  • computing device 650 can execute at least a portion of an uncertainty-guided deep learning reconstruction system 604 to reconstruct images from data received from the data source 602, or to otherwise solve other inverse problems modeled by a deep learning model.
  • the computing device 650 can communicate information about data received from the data source 602 to a server 652 over a communication network 654, which can execute at least a portion of the uncertainty-guided deep learning reconstruction system 604.
  • the server 652 can return information to the computing device 650 (and/or any other suitable computing device) indicative of an output of the uncertainty-guided deep learning reconstruction system 604.
  • computing device 650 and/or server 652 can be any suitable computing device or combination of devices, such as a desktop computer, a laptop computer, a smartphone, a tablet computer, a wearable computer, a server computer, a virtual machine being executed by a physical computing device, and so on.
  • the computing device 650 and/or server 652 can also reconstruct images from the data.
  • data source 602 can be any suitable source of data (e.g., measurement data, images reconstructed from measurement data, processed image data), such as a medical imaging system, another computing device (e.g., a server storing measurement data, images reconstructed from measurement data, processed image data), and so on.
  • data source 602 can be local to computing device 650.
  • data source 602 can be incorporated with computing device 650 (e.g., computing device 650 can be configured as part of a device for measuring, recording, estimating, acquiring, or otherwise collecting or storing data).
  • data source 602 can be connected to computing device 650 by a cable, a direct wireless link, and so on.
  • communication network 654 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), other types of wireless network, a wired network, and so on.
  • communication network 654 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
  • Communications links shown in FIG. 6 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, and so on.
  • Referring now to FIG. 7, an example of hardware 700 that can be used to implement data source 602, computing device 650, and server 652 in accordance with some embodiments of the systems and methods described in the present disclosure is shown.
  • computing device 650 can include a processor 702, a display 704, one or more inputs 706, one or more communication systems 708, and/or memory 710.
  • processor 702 can be any suitable hardware processor or combination of processors, such as a central processing unit (“CPU”), a graphics processing unit (“GPU”), and so on.
  • display 704 can include any suitable display devices, such as a liquid crystal display (“LCD”) screen, a light-emitting diode (“LED”) display, an organic LED (“OLED”) display, an electrophoretic display (e.g., an “e- ink” display), a computer monitor, a touchscreen, a television, and so on.
  • inputs 706 can include any suitable input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, and so on.
  • communications systems 708 can include any suitable hardware, firmware, and/or software for communicating information over communication network 654 and/or any other suitable communication networks.
  • communications systems 708 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • communications systems 708 can include hardware, firmware, and/or software that can be used to establish a Wi-Fi connection, a Bluetooth connection, a cellular connection, an Ethernet connection, and so on.
  • memory 710 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 702 to present content using display 704, to communicate with server 652 via communications system(s) 708, and so on.
  • Memory 710 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • memory 710 can include random-access memory (“RAM”), read-only memory (“ROM”), electrically programmable ROM (“EPROM”), electrically erasable ROM (“EEPROM”), other forms of volatile memory, other forms of non-volatile memory, one or more forms of semi-volatile memory, one or more flash drives, one or more hard disks, one or more solid state drives, one or more optical drives, and so on.
  • memory 710 can have encoded thereon, or otherwise stored therein, a computer program for controlling operation of computing device 650.
  • data source 602 can include a processor 722, one or more data acquisition systems 724, one or more communications systems 726, and/or memory 728.
  • processor 722 can be any suitable hardware processor or combination of processors, such as a CPU, a GPU, and so on.
  • the one or more data acquisition systems 724 are generally configured to acquire data, images, or both, and can include a medical imaging system, such as an MRI system.
  • the one or more data acquisition systems 724 can include any suitable hardware, firmware, and/or software for coupling to and/or controlling operations of a medical imaging system, such as an MRI system.
  • one or more portions of the data acquisition system(s) 724 can be removable and/or replaceable.
  • data source 602 can include any suitable inputs and/or outputs.
  • data source 602 can include input devices and/or sensors that can be used to receive user input, such as a keyboard, a mouse, a touchscreen, a microphone, a trackpad, a trackball, and so on.
  • data source 602 can include any suitable display devices, such as an LCD screen, an LED display, an OLED display, an electrophoretic display, a computer monitor, a touchscreen, a television, etc., one or more speakers, and so on.
  • communications systems 726 can include any suitable hardware, firmware, and/or software for communicating information to computing device 650 (and, in some embodiments, over communication network 654 and/or any other suitable communication networks).
  • communications systems 726 can include one or more transceivers, one or more communication chips and/or chip sets, and so on.
  • memory 728 can include any suitable storage device or devices that can be used to store instructions, values, data, or the like, that can be used, for example, by processor 722 to control the one or more data acquisition systems 724, and/or receive data from the one or more data acquisition systems 724; to generate images from data; present content (e.g., data, images, a user interface) using a display; communicate with one or more computing devices 650; and so on.
  • Memory 728 can include any suitable volatile memory, non-volatile memory, storage, or any suitable combination thereof.
  • processor 722 can execute at least a portion of the program to generate images, transmit information and/or content (e.g., data, images, a user interface) to one or more computing devices 650, receive information and/or content from one or more computing devices 650, receive instructions from one or more devices (e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, etc.), and so on.
  • any suitable computer-readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer-readable media can be transitory or non-transitory.
  • One or more components may reside within a process or thread of execution, may be localized on one computer, may be distributed between two or more computers or other processor devices, or may be included within another component (or system, module, and so on).
  • devices or systems disclosed herein can be utilized or installed using methods embodying aspects of the disclosure.
  • description herein of particular features, capabilities, or intended purposes of a device or system is generally intended to inherently include disclosure of a method of using such features for the intended purposes, a method of implementing such capabilities, and a method of installing disclosed (or otherwise known) components to support these purposes or capabilities.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Systems and methods are provided for training a neural network, or other machine learning model, using a cyclic consistency measure within a self-supervised learning framework. In general, the disclosed systems and methods implement an uncertainty estimation process that focuses on the data-fidelity component of a physics-driven deep learning (PD-DL) model by characterizing the cyclic consistency between different forward models. This uncertainty estimate is then used to guide the training of the PD-DL model. One advantage of the disclosed systems and methods is that this uncertainty-guided PD-DL strategy improves reconstruction quality.
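To make the abstract's idea concrete, the following is a minimal, illustrative sketch of a cyclic measurement-consistency check, not the patented method itself: a reconstruction is re-measured under a second forward model (a different undersampling mask), reconstructed again, and the drift between the two reconstructions serves as a proxy for data-fidelity uncertainty. The function names are hypothetical, and a simple zero-filled inverse FFT stands in for the PD-DL reconstruction network so the sketch stays self-contained.

```python
import numpy as np

def forward_model(image, mask):
    # Simplified MRI forward model: 2D FFT followed by undersampling
    # with a binary k-space mask.
    return mask * np.fft.fft2(image)

def recon(kspace, mask):
    # Stand-in for the PD-DL reconstruction network (hypothetical).
    # A zero-filled inverse FFT keeps the sketch runnable; in practice
    # a trained unrolled network would be applied here.
    return np.fft.ifft2(kspace)

def cycle_inconsistency(x_hat, mask_b):
    # Re-measure the reconstruction under a *different* forward model
    # (mask_b), reconstruct again, and report the relative drift.
    # Larger drift indicates higher data-fidelity uncertainty.
    y_b = forward_model(x_hat, mask_b)
    x_cycle = recon(y_b, mask_b)
    return np.linalg.norm(x_cycle - x_hat) / np.linalg.norm(x_hat)
```

In an uncertainty-guided training loop, a per-sample score of this kind could be used to weight the self-supervised loss, down-weighting samples (or regions) where the cycle between forward models disagrees.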
PCT/US2025/011821 2024-01-18 2025-01-16 Cyclic measurement consistent training for physics-driven deep learning reconstruction and uncertainty guidance Pending WO2025155688A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463622229P 2024-01-18 2024-01-18
US63/622,229 2024-01-18

Publications (1)

Publication Number Publication Date
WO2025155688A1 (fr)

Family

ID=96471905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/011821 Pending WO2025155688A1 (fr) Cyclic measurement consistent training for physics-driven deep learning reconstruction and uncertainty guidance

Country Status (1)

Country Link
WO (1) WO2025155688A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140286560A1 (en) * 2011-11-06 2014-09-25 Mayo Foundation For Medical Education And Research Method for calibration-free locally low-rank encouraging reconstruction of magnetic resonance images
US20200302596A1 (en) * 2019-03-18 2020-09-24 Siemens Healthcare Gmbh Automated uncertainty estimation of lesion segmentation
US20220357415A1 (en) * 2021-04-30 2022-11-10 Regents Of The University Of Minnesota Parallel transmission magnetic resonance imaging with a single transmission channel rf coil using deep learning
WO2023114317A1 (fr) * 2021-12-14 2023-06-22 Regents Of The University Of Minnesota Noise-suppressing non-linear reconstruction of magnetic resonance images
WO2023186609A1 (fr) * 2022-03-31 2023-10-05 Koninklijke Philips N.V. Deep learning-based denoising of MR images

Similar Documents

Publication Publication Date Title
US10573031B2 (en) Magnetic resonance image reconstruction with deep reinforcement learning
US10671939B2 (en) System, method and computer-accessible medium for learning an optimized variational network for medical image reconstruction
CN117011673B Electrical impedance tomography image reconstruction method and apparatus based on noise diffusion learning
US9542761B2 (en) Generalized approximate message passing algorithms for sparse magnetic resonance imaging reconstruction
US12067652B2 (en) Correction of magnetic resonance images using multiple magnetic resonance imaging system configurations
US12106401B2 (en) Systems and methods for training machine learning algorithms for inverse problems without fully sampled reference data
US11867786B2 (en) Parameter map determination for time domain magnetic resonance
US11982725B2 (en) Parallel transmission magnetic resonance imaging with a single transmission channel RF coil using deep learning
KR20220070502A Maxwell parallel imaging
Ozturkler et al. Smrd: Sure-based robust mri reconstruction with diffusion models
US11948311B2 (en) Retrospective motion correction using a combined neural network and model-based image reconstruction of magnetic resonance data
CN115294229A Method and device for reconstructing magnetic resonance imaging (MRI) images
CN113597620A Compressed sensing using neural networks
US20230122658A1 (en) Self-supervised joint image reconstruction and coil sensitivity calibration in parallel mri without ground truth
Barbieri et al. Circumventing the curse of dimensionality in magnetic resonance fingerprinting through a deep learning approach
CN113050009B Three-dimensional magnetic resonance fast parametric imaging method and apparatus
US11918337B2 (en) Magnetic resonance imaging apparatus, noise reduction method and image processing apparatus
WO2025155688A1 (fr) Cyclic measurement consistent training for physics-driven deep learning reconstruction and uncertainty guidance
US20240183923A1 (en) Autocalibrated multi-shot magnetic resonance image reconstruction with joint optimization of shot-dependent phase and parallel image reconstruction
CN111856364A Magnetic resonance imaging method, apparatus, system, and storage medium
US12044764B2 (en) Model-based Nyquist ghost correction for reverse readout echo planar imaging
Xiao et al. Diffusion model based on generalized map for accelerated MRI
Nana et al. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging
Parker et al. Rician Likelihood Loss for Quantitative MRI With Self‐Supervised Deep Learning
Sheng et al. Parallel MR image reconstruction based on triple cycle optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25742399

Country of ref document: EP

Kind code of ref document: A1