WO2022043910A1 - Systems and methods for automatically enhancing low-dose PET images with robustness to out-of-distribution (OOD) data - Google Patents
Systems and methods for automatically enhancing low-dose PET images with robustness to out-of-distribution (OOD) data
- Publication number
- WO2022043910A1 (PCT/IB2021/057826)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pet
- images
- neural network
- dnn
- deep neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/441—AI-based methods, deep learning or artificial neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/464—Dual or multimodal imaging, i.e. combining two or more imaging modalities
Definitions
- the present disclosure relates to medical imaging systems and, more particularly, to systems and methods for automatically enhancing ultra-low-dose PET images with robustness to out-of-distribution (OOD) data using deep learning.
- OOD out-of-distribution
- PET Positron emission tomography
- the ionizing radiation involved in PET is a cause of concern for studies requiring longitudinal imaging, and imaging for radiation-sensitive populations including children and pregnant women.
- image quality is directly proportional to the number of photon-counts available for image reconstruction.
- Simultaneous PET and magnetic resonance imaging (MRI) systems typically require long acquisition times for multi-contrast MRI scans, permitting PET scans of equivalent duration to be accommodated.
- Simultaneous PET and MRI enables lower-dose PET imaging in comparison to PET and X-ray computed tomography (PET-CT) systems.
- PET-CT X-ray computed tomography
- Recent deep neural network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) images, to high-quality PET images.
- current DNN methods focus on applications involving test data that closely match the statistical characteristics of the training data and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions that differ from the distribution of images in the training set.
- OOD PET data could arise from several underlying factors, e.g., variations in radiotracers, anatomy, pathology, photon counts, hardware, reconstruction protocol. Other factors causing variations in data include differences in age, differences in imaging protocols, subject motion, and pathology. Such variations in the data lead to change in image features such as structure, texture, contrast, and artifacts.
- a mapping between the LD-PET sinogram data and the SD-PET sinogram data can lead to some improvement in the reconstructed SD-PET images in comparison to the strategy of learning the mapping from LD-PET to SD-PET in the spatial image domain.
- the measured raw sinogram data might not always be accessible.
- the conventional systems and methods employ loss functions either exclusively in the spatial image space, or exclusively in the sinogram space, but not both.
- DNN-based methods for undersampled MRI reconstruction have shown that including a transform-domain (k-space) loss function in addition to the image-space loss function improves the quality of reconstructed images at higher undersampling of MRI data. The proposed methods primarily focus on improving the accuracy of the predicted PET images using test data that is similar to the training data; they seldom focus on (i) evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions, and (ii) quantifying the uncertainty in the predicted images.
- Modeling uncertainty in DNNs can potentially (i) inform the radiologist about the imperfections in reconstructions that may be crucial in clinical decision making or subsequent automated postprocessing of reconstructed images; and (ii) provide improved performance when the DNN is presented with OOD data.
- Recent works on Bayesian deep learning allow uncertainty estimation in DNN outputs. Some recent methods propose to estimate the uncertainty in the outputs, during the training and testing phases, using stochastic layers in the DNN architecture. In the context of medical image analysis, other recent works discuss uncertainty estimation for medical image segmentation, and further works discuss uncertainty estimation for various medical image regression tasks such as image enhancement for diffusion MRI, image registration, and biological age estimation using MRI, respectively.
- similar frameworks for PET imaging are absent in the literature. Hence, there is a need for a method to improve the quality of the reconstructed images and robustness of the learned model in reconstructing OOD PET data in comparison to state-of-the-art methods.
- DNN deep neural network
- sinogram and uncertainty-based deep neural network (suDNN) framework for predicting SD-PET images from LD-PET images.
- sinogram and uncertainty-based deep neural network (suDNN) framework for providing improved robustness to OOD acquisitions of PET data.
- the present invention uses a deep-learning-based framework to synthesize standard-dose PET images (SD-PET) from ultra-low-dose PET images (uLD-PET) with improved robustness to variations in the input data.
- SD-PET standard-dose PET images
- uLD-PET ultra-low-dose PET images
- the present invention provides uncertainty quantification of the reconstructed images.
- the present invention provides ultra-low-dose imaging in PET scanners with reduced dosage requirement and consequently reducing the exposure to ionizing radiations, as well as provides faster imaging, which in turn improves throughput of PET scanners.
- the present invention particularly discloses a sinogram and uncertainty-based deep neural network (suDNN) framework.
- the main advantages of the present invention are (i) the significant improvement in accuracy of reconstructed images when presented with out-of-distribution (OOD) data, and (ii) the quantification of uncertainty in the prediction of SD-PET images.
- the present invention does not require a very close match between training and testing data and provides more information regarding the quality of prediction made by the DNN. This allows inclusion of a wide variety of datasets without the necessity for re-training the DNN.
- the present invention models the underlying imaging physics as well as the inherent variance of the training data to provide improved robustness to OOD PET data.
- SD-PET standard-dose positron emission tomography
- the method comprises the steps of: training at least one deep neural network (DNN) using a training data set, wherein the deep neural network (DNN) comprises an input layer, an output layer, and a convolutional layer; receiving the plurality of multimodal input images of a region of interest in a body from at least one scanner machine at the input layer of the trained deep neural network (DNN) having a plurality of weights; processing the plurality of multimodal input images for the trained deep neural network (DNN); and generating a plurality of predicted mean SD-PET images and variance SD-PET images by the trained deep neural network at the output layer using the plurality of weights of the trained deep neural network.
- the plurality of multimodal input images comprise a plurality of low dose PET (LD-PET) images and a plurality of multimodal anatomical images.
- the plurality of multimodal anatomical images may comprise a plurality of MRI images or CT images.
- the processing of the plurality of multimodal input images comprises the steps of (i) correcting attenuation in the plurality of LD-PET images using the plurality of multimodal anatomical images, (ii) reconstructing the attenuation-corrected LD-PET images to generate three-dimensional (3D) image data, (iii) extracting image data from the three-dimensional (3D) image data of the plurality of multimodal input images for the deep neural network (DNN), and (iv) providing the extracted image data to the trained deep neural network.
- a system for automatically synthesizing standard-dose positron emission tomography (SD-PET) images from a plurality of multimodal input images comprises at least one scanner machine, and at least one computing unit connected with the at least one scanner machine for processing.
- the at least one computing unit comprises at least one processing unit and a memory unit.
- the at least one processing unit is configured to perform the steps comprising: training at least one deep neural network (DNN) using a training data set, wherein the deep neural network (DNN) comprises an input layer, an output layer, and a convolutional layer; receiving the plurality of multimodal input images of a region of interest in a body from at least one scanner machine at the input layer of the trained deep neural network (DNN) having a plurality of weights; processing the plurality of multimodal input images for the trained deep neural network (DNN) as described in the method of the previous aspect; and generating a plurality of predicted mean SD-PET images and variance SD-PET images by the trained deep neural network at the output layer using the plurality of weights of the trained deep neural network.
- the plurality of multimodal input images comprise a plurality of low dose PET (LD-PET) images and a plurality of multimodal anatomical images.
- Figure 1A shows a suDNN framework for automatically generating SD-PET images using the multimodal input images in accordance with an exemplary embodiment of the present invention.
- Figure 1B illustrates a flow chart showing a method for automatically generating SD-PET images from LD-PET images in accordance with an exemplary embodiment of the present disclosure.
- Figure 1C illustrates a flow chart showing a process of training the deep neural network (DNN) using the training data set in accordance with an exemplary embodiment of the present disclosure.
- Figure 1D illustrates a system for automatically generating SD-PET images from LD-PET images in accordance with an exemplary embodiment of the present disclosure.
- Figure 1E illustrates a schematic of the deep neural network (DNN) in accordance with an exemplary embodiment of the present disclosure.
- Figure 2 illustrates a suDNN network architecture in accordance with an exemplary embodiment of the present invention.
- Figure 3 illustrates the predicted images from different methods across three different variations of the LD-PET data for a representative subject.
- Figures 4(a)-4(b) illustrate quantitative evaluation of all the methods at 3 different levels of degradation of the input PET data: LD-PET, vLD-PET, and uLD-PET.
- Figure 5 illustrates visual inspection of the zoomed ROIs of the input, reference, and predicted images for the case of uLD-PET.
- Figure 6 illustrates qualitative evaluation of the methods for three additional types of OOD data.
- Figure 7 illustrates quantitative evaluation of the methods for three additional OOD data.
- Figure 8 illustrates quantitative evaluation of the DNNs in the ablation analysis for the input PET images of LD-PET, vLD-PET, and uLD-PET.
- Figure 9 illustrates visual comparison of the output SD-PET images from the ablated suDNN versions and the proposed suDNN for the input PET images.
- Figure 10 illustrates utility of uncertainty maps produced by the present method with the inputs uLD-PET and LD-PET.
- Figure 11 illustrates feature maps obtained from initial layers of the disclosed network with unimodal (PET) and multimodal inputs.
- a system and method for automatically synthesizing SD-PET images from multimodal data that exploits the underlying physics of the imaging are disclosed.
- the system uses a novel deep neural network (DNN) framework architecture that also automatically models the per-voxel heteroscedasticity in the training data to automatically predict SD-PET images from LD-PET images (DRF 180) and improve robustness to OOD acquisitions of PET data.
- the present invention models both imaging physics as well as uncertainty for the task of medical image synthesis using DNNs.
- the present invention particularly discloses a sinogram and uncertainty-based deep neural network (suDNN) framework.
- the present disclosure discusses a model along with the network architecture and a training strategy of the suDNN framework for estimating SD- PET images using the multimodal input images.
- for the model of estimation, let random fields U^LD and U^SD model the acquired LD-PET and SD-PET images, respectively, across the population.
- random fields V^T1 and V^T2 model the acquired T1w and T2w MRI images, respectively.
- the PET and MRI images (U^LD, V^T1, V^T2, and U^SD) are spatially co-registered to a common coordinate frame, where each image contains K voxels.
- the present invention configures the system to learn the suDNN by relying on a multimodal image-to-image translation framework incorporating a dropout-based statistical model, for improved regularization during learning, involving a Bernoulli random variable with parameter p.
- the suDNN models a stochastic regressor parameterized by weights θ and the dropout probability parameter p, such that its two outputs characterize distributions on the SD-PET images and on their associated per-voxel uncertainties, respectively.
- suDNN learns the regressor using the training set comprising images from N subjects.
- FIG. 1A shows a suDNN framework for automatically generating SD-PET images using the multimodal input images in accordance with an exemplary embodiment of the present invention.
- the present invention discloses a DNN model based on a U-Net architecture.
- the present suDNN differs from the standard U-Net by incorporating: (i) a multimodal input where the data from the PET, T1w MRI, and T2w MRI images are treated as different channels, (ii) a 2.5D style, where the estimation of a particular slice in the SD-PET image takes as input, from each modality, a collection of slices in the neighborhood, (iii) a dual-head output (as shown in Figure 1), where the output from one DNN head represents the predicted SD-PET images, and the output from the other head represents the per-voxel variances modeling the variability in the predicted SD-PET images, and (iv) a dropout model, following its bottleneck layer, for regularization during learning.
- the suDNN models a mapping in which a single convolutional backbone, parameterized by the weights θ and the Bernoulli random variable parameterized by p, feeds the resulting latent features to two disjoint output heads, i.e., one head Φ_Y representing the predicted images Y, and the other head Φ_C representing the variance images C.
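- As an illustration of this dual-head formulation, a minimal PyTorch sketch is given below. It is not the patented implementation: the module names, channel count (assumed to be 3 modalities × 5 neighboring slices), feature width, and the simplified backbone are assumptions introduced here for clarity, whereas the disclosure uses a U-Net-style encoder-decoder backbone with skip connections.

```python
import torch
import torch.nn as nn

class SuDNNHeads(nn.Module):
    """Sketch of a dual-head regressor: a shared convolutional backbone with
    Bernoulli dropout feeds two disjoint output heads, one for the predicted
    SD-PET image Y and one for the per-voxel variance image C."""

    def __init__(self, in_channels=15, features=64, p_dropout=0.2):
        super().__init__()
        # Placeholder backbone; the disclosure employs a U-Net-style
        # encoder-decoder with skip connections instead of this stack.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1),
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
            nn.Dropout2d(p=p_dropout),   # dropout with Bernoulli parameter p
            nn.Conv2d(features, features, 3, padding=1),
            nn.BatchNorm2d(features), nn.ReLU(inplace=True),
        )
        # Head Phi_Y: predicted SD-PET image, kept non-negative via ReLU.
        self.head_mean = nn.Sequential(nn.Conv2d(features, 1, 1), nn.ReLU())
        # Head Phi_C: per-voxel variance, kept positive via exponentiation.
        self.head_logvar = nn.Conv2d(features, 1, 1)

    def forward(self, x):
        z = self.backbone(x)
        y = self.head_mean(z)               # mean image Y
        c = torch.exp(self.head_logvar(z))  # variance image C > 0
        return y, c
```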
- MSE mean squared error
- the present invention provides a loss function that explicitly adapts to the heteroscedasticity of the per-voxel residuals between the output PET image and the high-quality PET image U^SD.
- empirical evaluation shows that such a model leads to the robustness of the learned model to OOD PET test data.
- the output of suDNN is modelled as a pair consisting of (i) the predicted SD-PET images Y and (ii) the images C modeling the per-voxel variances in the residuals between the predicted images and the reference SD-PET image.
- an alternate interpretation for the values in Y and C stems from the notion of a DNN that outputs a family of images modeled by a Gaussian distribution, where Y models the per-voxel means and C models the per-voxel variances.
- the present invention provides loss functions that enforce similarity in two domains, i.e., (i) the spatial domain and (ii) the sinogram domain modeling the PET detector geometry. It is observed that incorporation of the transform-domain (sinogram-domain) loss and modeling the per-voxel heteroscedasticity in both domains make the present invention robust to OOD acquisitions.
- the overall loss function of the suDNN is a weighted combination of two loss functions, i.e., (i) an uncertainty-aware loss in the image space and (ii) an uncertainty-aware PET-physics-based loss in the sinogram space.
- Uncertainty-Aware Spatial-Domain Loss is determined as below.
- for a given multimodal input image X_i, let Y_ik represent the k-th voxel in the spatial domain for the i-th predicted image Y_i, and let C_ik represent the k-th voxel for the i-th predicted variance image C_i.
- the present disclosure models a Gaussian likelihood for the observed image U^SD in the image space, parameterized by the per-voxel means in Y and the per-voxel variances in C.
- the negative of the log-likelihood function leads to the loss over the training set T, and the image-space loss is given by $\mathcal{L}_{\text{image}}(\theta) = \mathbb{E}_{\text{Bernoulli}(p)}\big[\sum_{i=1}^{N}\sum_{k=1}^{K} (Y_{ik} - U^{SD}_{ik})^2/(C_{ik} + \epsilon) + \log(C_{ik} + \epsilon)\big]$ (Equation 2), where ε is a small constant for numerical stability.
- N denotes the number of training samples, K the number of voxels in each image, and $\mathbb{E}_{\text{Bernoulli}(p)}[\cdot]$ represents expectation under the Bernoulli probability distribution characterizing the dropout.
- Equation 2 consists of two components: (i) the per-voxel squared residual/error scaled down by the variance C_ik, and (ii) the penalty term log(C_ik + ε) on the per-voxel variance.
- the penalty term penalizes large values of C_ik.
- Positivity is applied on the elements of the suDNN output Y using a ReLU activation function in the final layer of the head Φ_Y. Further, positivity of C is applied by employing an exponentiation layer as the final layer of Φ_C.
- the suDNN learning does not require explicit supervision in the form of ground-truth observations for C, but rather learns to map X to C using the loss in Equation 2 and the SD-PET image data U^SD.
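- The following is a small sketch, under the Gaussian-likelihood assumptions above, of how the uncertainty-aware image-space loss of Equation 2 could be computed in PyTorch; the function name and default epsilon are illustrative assumptions rather than the exact implementation.

```python
import torch

def uncertainty_aware_image_loss(y_pred, c_pred, y_ref, eps=1e-6):
    """Per-voxel Gaussian negative log-likelihood (Equation 2, up to constants):
    squared residual scaled down by the predicted variance, plus a log penalty
    that discourages arbitrarily large variances."""
    resid_sq = (y_pred - y_ref) ** 2
    loss = resid_sq / (c_pred + eps) + torch.log(c_pred + eps)
    return loss.mean()  # averages over voxels and training samples in the batch
```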
- loss is modelled in a sinogram space.
- let the operator S model the linear sinogram transformation associated with PET image acquisition for each transaxial slice.
- the operator S takes a 2D image with K voxels and produces a sinogram with L discrete elements.
- since the per-voxel residual in the image space is modeled using a Gaussian distribution, the per-element residual in the sinogram space also follows a Gaussian distribution.
- the covariances between the per-element residuals in the sinogram domain, which result from the dependencies introduced by the sinogram operator S, are excluded.
- the resulting physics-based loss term in the sinogram domain is $\mathcal{L}_{\text{sino}}(\theta) = \mathbb{E}_{\text{Bernoulli}(p)}\big[\sum_{i=1}^{N}\sum_{l=1}^{L} ([S Y_i]_l - [S U^{SD}_i]_l)^2/(\sigma^2_{il} + \epsilon') + \log(\sigma^2_{il} + \epsilon')\big]$, where $\sigma^2_{il}$ denotes the per-element variance of the sinogram-domain residual (obtained from the variance image C_i under the independence assumption above) and ε' is a small positive real-valued constant for numerical stability.
- in the present DNN, the overall loss function, consisting of uncertainty-aware loss functions in both the image and the sinogram space, is minimized and is given by $\mathcal{L}(\theta) = \mathcal{L}_{\text{image}}(\theta) + \lambda\,\mathcal{L}_{\text{sino}}(\theta)$, where λ is a non-negative real-valued free parameter that controls the weight of the physics-based sinogram-domain loss.
- the value of λ is tuned using a validation set. For example, the value of λ is tuned to 5e-3.
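- A hedged sketch of the overall loss follows, reusing the image-space loss sketched earlier and assuming the sinogram operator is available as a precomputed per-slice system matrix S of shape (L, K); in practice a ray-tracing projector (e.g., from STIR) would be used, and the element-wise variance propagation below simply reflects the stated independence assumption.

```python
import torch

def sinogram_loss(y_pred, c_pred, y_ref, S, eps=1e-6):
    """Physics-based loss in the sinogram domain.
    S: (L, K) system matrix; each image slice is flattened to K voxels."""
    B = y_pred.shape[0]
    y_p = y_pred.reshape(B, -1) @ S.t()          # predicted sinograms  (B, L)
    y_r = y_ref.reshape(B, -1) @ S.t()           # reference sinograms  (B, L)
    var = c_pred.reshape(B, -1) @ (S ** 2).t()   # propagated per-element variances
    resid_sq = (y_p - y_r) ** 2
    return (resid_sq / (var + eps) + torch.log(var + eps)).mean()

def total_loss(y_pred, c_pred, y_ref, S, lam=5e-3, eps=1e-6):
    """Overall suDNN loss: image-space term plus lambda-weighted sinogram term."""
    img = uncertainty_aware_image_loss(y_pred, c_pred, y_ref, eps)
    sino = sinogram_loss(y_pred, c_pred, y_ref, S, eps)
    return img + lam * sino
```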
- FIG. 1B illustrates a flow chart showing a method for automatically generating SD-PET images from LD-PET images in accordance with an exemplary embodiment of the present disclosure.
- the method comprises the steps of training at least one deep neural network (DNN) using a training data set.
- the deep neural network (DNN) comprises an input layer, an output layer, a convolutional layer, a bottleneck layer, a dropout layer, and a fusion layer.
- the plurality of multimodal input images of a region of interest in a body are received from at least one scanner machine, at the input layer of trained deep neural network (DNN) having a plurality of weights.
- the plurality of multimodal input images comprise a plurality of low dose PET (LD-PET) images and a plurality of MRI images.
- the plurality of multimodal input images is further processed for the deep neural network (DNN).
- the processing step comprises: (i) correcting attenuation in the plurality of LD-PET images using the plurality of MRI images, (ii) reconstructing the attenuation-corrected LD-PET images to generate three-dimensional (3D) image data; and (iii) extracting image data from the three-dimensional (3D) image data of the plurality of multimodal input images for the deep neural network (DNN).
- the method may comprise the step of providing the extracted image data to the trained deep neural network and generating a plurality of predicted mean SD-PET images and variance SD-PET images by the trained deep neural network at the output layer.
- the extracted image data comprises a 2.5D image data.
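- As a concrete, hypothetical illustration of the 2.5D extraction step, the sketch below stacks a central slice and its neighbors from each co-registered modality into a multi-channel input; the two-slice neighborhood follows the five-slice scheme described later for 2.5D training, and the function name and edge handling are assumptions.

```python
import numpy as np

def extract_2p5d_input(volumes, slice_idx, half_window=2):
    """Build a 2.5D multi-channel input from co-registered 3D volumes.
    volumes: list of 3D arrays (e.g., [LD-PET, T1w MRI, T2w MRI]) of shape (Z, H, W).
    Returns an array of shape (n_modalities * (2*half_window + 1), H, W)."""
    channels = []
    for vol in volumes:
        z_max = vol.shape[0] - 1
        for dz in range(-half_window, half_window + 1):
            z = min(max(slice_idx + dz, 0), z_max)  # clamp at the volume edges
            channels.append(vol[z])
    return np.stack(channels, axis=0)
```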
- Figure 1C illustrates a flow chart showing a process of training the deep neural network (DNN) using the training data set in accordance with an exemplary embodiment of the present disclosure.
- the process of training the deep neural network (DNN) using the training data set comprises the steps of receiving and storing the training data set in the computing unit to train the deep neural network (DNN) having a plurality of weights.
- the training dataset comprises a plurality of low dose PET (LD-PET) training images, a plurality of T1-weighted (T1w) MRI training images, a plurality of T2-weighted (T2w) MRI training images, and raw training data of SD-PET with a plurality of standard-dose positron emission tomography (SD-PET) training images and sensor data.
- LD-PET low dose PET
- T1w T1 weighted
- T2w T2 weighted
- SD-PET standard-dose Positron Emission Tomography
- training comprises receiving the training data set at the input layer of the deep neural network (DNN), using the plurality of LD-PET training images, the plurality of T1w MRI training images, and the plurality of T2w MRI training images as input images, and training the deep neural network (DNN) by processing the input images, similar to the processing discussed in Figure 1B, using the plurality of weights.
- the processing step comprises: (i) correcting attenuation in the plurality of LD-PET training images using the plurality of MRI training images, (ii) reconstructing the attenuation-corrected LD-PET training images to generate three-dimensional (3D) image data; and (iii) extracting image data from the three-dimensional (3D) image data for further training.
- the extracted image data comprises a 2.5D image data.
- a raw output data of predicted SD-PET with a plurality of predicted SD-PET output mean images and a plurality of predicted variance output images are generated using the deep neural network (DNN) by using the plurality of weights of the DNN.
- the raw output data of predicted SD-PET with the plurality of predicted SD-PET output images is compared with the raw training data, with the plurality of SD-PET training images as reference images, and a quality and a loss of the predicted SD-PET output images are determined.
- the values of the weights of the deep neural network (DNN) are stored when it is determined that the network has converged or the evaluated loss is less than a threshold value.
- the weights of the deep neural network are updated when it is determined that the network has not converged, until the evaluated loss falls below the threshold value.
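- A minimal sketch of this training loop (weight updates until convergence or until the loss falls below a threshold, then storing the weights) is given below; the optimizer choice, threshold value, learning-rate schedule, and checkpoint path are illustrative assumptions, not the patented procedure.

```python
import torch

def train_sudnn(model, loader, loss_fn, epochs=500, loss_threshold=1e-3,
                ckpt_path="sudnn_weights.pt", lr=1e-3):
    """Update the DNN weights epoch by epoch; store them once the evaluated
    loss falls below the threshold (or training ends)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)  # decaying LR
    for epoch in range(epochs):
        epoch_loss = 0.0
        for x, y_ref in loader:      # x: multimodal 2.5D input, y_ref: SD-PET slice
            opt.zero_grad()
            y_pred, c_pred = model(x)
            loss = loss_fn(y_pred, c_pred, y_ref)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        sched.step()
        epoch_loss /= max(len(loader), 1)
        if epoch_loss < loss_threshold:  # treat as converged
            break
    torch.save(model.state_dict(), ckpt_path)  # store the trained weights
    return model
```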
- the method further comprises the step of generating a plurality of residual images between the plurality of predicted SD-PET output images and reference images by the deep neural network (DNN).
- the plurality of multimodal input images may be real-time images of the region of interest in the body registered using at least one scanner machine.
- the region of interest in the body is selected from the group including, but not limited to brain, lungs, liver, spleen, kidney, lymph nodes, intestine, stomach, breast, prostate, testicle and uterus.
- the system (100) comprises at least one scanner machine (101), at least one console (105) connected with the at least one scanner machine, at least one controller (107) connected with scanner machines for scan control, at least one computing unit (110) connected with controller and scanner machines for data/image processing, a database (115) for storing and/or accessing the data and a server (120) connected with the computing unit and other physical units of the system.
- the scanner machine is a simultaneous PET-MRI scanner comprising both MRI components for MRI scanning and PET components for PET imaging.
- multiple scanner machines comprising one or more MRI/CT scanners and one or more PET scanners are used.
- the computing unit may comprise one or more processing units configured to perform multiple processes for the system.
- the system comprises one or more controllers used for controlling the scanners.
- the controller may be a computing unit.
- number of controllers or computing units may depend on the number of scanners used.
- one or more console units for operating the scanners may depend on the number of scanners used.
- Console units and controllers may be located in a scan planning room.
- the server is connected with the computer wirelessly or through a wired medium.
- the server is a local server connected with the computing unit via cables.
- the server may be a remote server connected with the computing unit via a wireless network.
- the wireless network may be, but is not limited to, WiFi, WLAN, Bluetooth, or any short-range network.
- the server may be a cloud server.
- the system comprises two or more computing units and two or more servers connected with computing unit.
- the computing unit is a device including, but not limited to, a computer comprising a central processing unit, a mobile device, a portable device, a laptop, a palmtop, a tablet, or any other suitable computing device.
- the server may comprise a plurality of databases and/or memory units that are maintained and used for storing the PET training images, MRI training images.
- the database may store the training dataset of images.
- the server may comprise one or more processing units configured to perform multiple processes for the system.
- controllers may comprise one or more processing units and memory units.
- processing units of the computing unit are configured to perform various steps of a method for automatically synthesizing SD-PET images from a plurality of multimodal input images, automatically predicting SD-PET images from LD-PET images, and providing robustness to OOD acquisitions of PET data.
- the plurality of multimodal input images of a region of interest in a body are received from at least one scanner machine, at the input layer of trained deep neural network (DNN) having a plurality of weights.
- the plurality of multimodal input images comprise a plurality of low dose PET (LD-PET) images and a plurality of MRI images.
- the plurality of multimodal input images are real-time images of the region of interest in the body registered using at least one scanner machine.
- the plurality of multimodal input images are retrieved from out-of-distribution (OOD) or unseen data set stored in a database.
- the plurality of input PET images can be obtained from any suitable dose.
- the plurality of input PET images can be obtained from a scanner, such as an MRI scanner or a CT scanner.
- the plurality of input multimodal anatomical images can be obtained from a scanner, such as an MRI scanner or a CT scanner.
- any suitable scanner can be used to obtain the plurality of input PET images and the plurality of input multimodal anatomical images.
- the suitable scanner may be, but is not limited to, a CT, PET, or MRI scanner, or any combination of two modalities such as PET/CT or PET/MRI as non-limiting examples. This allows scanning of the region of interest in combination with up to two additional modalities. Functional and structural as well as other modalities can then be combined to produce a more complete image of the region of interest.
- processing units of controller may perform various steps of the method of the present invention.
- server may be configured to perform various steps of the method of the present invention.
- Figure 1E illustrates a schematic of the deep neural network (DNN) in accordance with an exemplary embodiment of the present disclosure.
- the deep neural network comprises an input layer, an output layer, and a convolutional layer.
- the deep neural network (DNN) further comprises a bottleneck layer, a dropout layer, and a fusion layer.
- the deep neural network (DNN) comprises additionally an encoder and a decoder.
- the present invention uses a suDNN network architecture as shown in Figure 2.
- the suDNN fuses the multimodal LD-PET and MRI images to produce a single channel fused image using a 1 x 1 convolutional layer.
- a U-Net architecture comprising an encoder and a decoder is employed, with a bottleneck layer in the middle.
- the encoder and decoder share a symmetric structure (with skip connections across encoder and decoder), each consisting of 3 convolutional blocks, with each encoder block downsampling the feature maps by a factor of 2 using max-pooling.
- the present architecture uses standard batch-normalization and a ReLU activation function after every convolutional layer.
- a dropout layer, characterized by the Bernoulli parameter p, follows the bottleneck layer.
- the output of the penultimate convolutional block in the decoder is fed into two identical convolutional blocks, one for predicting the mean Y and one for predicting the variance C.
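- For illustration only, a hedged sketch of the 1×1 fusion layer and of one encoder-style convolutional block (convolution, batch-normalization, ReLU, max-pooling) of the kind described above is shown below; the channel counts are assumptions (15 input channels corresponding to 3 modalities × 5 slices), not values fixed by the disclosure.

```python
import torch.nn as nn

# Learnable 1x1 convolution fusing the multimodal channels into a single fused image.
fusion = nn.Conv2d(in_channels=15, out_channels=1, kernel_size=1)

def conv_block(in_ch, out_ch):
    """One encoder block: two conv layers, each followed by batch-normalization
    and ReLU, then max-pooling that halves the spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2),
    )
```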
- PET and MRI data were collected from a cohort of 28 healthy individuals (volunteers with mean age 19.6 years and standard deviation 1.7 years, including 21 females) on a 3T Siemens Biograph mMR system. The average dose administered for each subject was approximately 230 MBq.
- the SD-PET image was reconstructed using counts obtained over a duration of 30 minutes, starting 55 minutes after the administration of the tracer.
- the total number of useful counts over the 30-minute duration used for reconstruction of the SD-PET image was around 600×10^6.
- to simulate the LD-PET data, around 3.4×10^6 counts were randomly selected, spread uniformly over the scan duration, resulting in a DRF of around 180×.
- pseudo-CT maps generated using the UTE images were employed.
- Both the SD-PET and LD-PET images were reconstructed using proprietary software implementing the ordinary-Poisson ordered-subset expectation-maximization (OP-OSEM) method with 3 iterations and 21 subsets, along with point spread function (PSF) modeling and post-reconstruction Gaussian smoothing.
- the software produced reconstructed PET images with voxel sizes of 2.09 × 2.09 × 2.03 mm³.
- the voxel size for the reconstructed MRI images was 1 mm³ isotropic.
- the LD-PET, SD-PET, and T2w MRI images were registered (using rigid spatial transformations) and resampled to the T1w MRI image space using ANTS.
- the performance of the suDNN of the present invention was evaluated in comparison to five other recently proposed DNN-based methods for SD-PET prediction. For a fair comparison across methods, a 2.5D-style training scheme is used. To produce a predicted image for a given slice, five slices (one central, two above, and two below) were used as the input of the DNN. The five baseline methods, denoted M1 to M5, are given as follows:
- conditional DIP method denoted as ‘M1’
- for M1, an unsupervised method based on the conditional deep image prior proposed in Cui et al., titled "PET image denoising using unsupervised deep learning", was used.
- since the method is unsupervised, M1 does not rely on any training data.
- the input to the DNN is the structural MRI image.
- the validation set was used to tune the optimal number of epochs, to maximize the structural similarity index (SSIM) between the predicted PET image and the reference SD-PET image.
- SSIM structural similarity index
- for a unimodal ResNet with perceptual loss, denoted as 'M2', a method similar to the framework proposed in Gong et al., titled "PET image denoising using a deep neural network through fine tuning", was used.
- the method M2 uses the PET image (unimodal) as input, with a standard ResNet architecture, and employs a perceptual loss based on features obtained from a VGG network trained on natural images.
- M5 Multi-channel GAN with fused input
- the multi-channel input consists of PET and multi-contrast MRI images. Due to the non-availability of diffusion-weighted MRI images for the dataset, only the Tlw and T2w MRI images were used for training.
- the M5 framework initially models a learnable 1 x 1 convolution layer to produce a fused image that becomes the input to the generator of the GAN.
- the present invention uses a 2.5D U-Net-based architecture for the generator. While methods M3-M5 were disclosed to achieve DRFs in the range 4-200, M1 and M2 do not focus on dose reduction, but on denoising instead. DNNs M1, M3, M4, M5, and suDNN employ a similar U-Net-based backbone architecture with a comparable number of parameters. On the other hand, M2 employs a ResNet as described above, with significantly more parameters than the other DNNs. For all the DNNs that necessitate a training stage (M2-M5 and suDNN), the present invention uses the same training-validation-testing split. The hyperparameters for all the DNNs are tuned using the validation set.
- All the DNNs are trained with a decaying learning rate for about 500 epochs. In practice, it is observed that all the models converged within 300-400 epochs. For each DNN, the model providing the best performance (SSIM) on the validation set is selected. For quantitative evaluation of the predicted SD-PET images across all methods, two metrics were used: (i) peak signal to noise ratio (PSNR), and (ii) structural similarity index (SSIM), with respect to the reference SD-PET images.
- PSNR peak signal to noise ratio
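- For reference, a small sketch of how these two metrics might be computed per slice with scikit-image follows; the data-range handling and function name are assumptions and not part of the disclosure.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(pred, ref):
    """Compute PSNR and SSIM of a predicted SD-PET slice against the reference slice."""
    data_range = float(ref.max() - ref.min())
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)
    return psnr, ssim
```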
- the invention uses the training set of LD-PET images from a single cohort as discussed above. Primarily, the performance of all the methods on the testing set of LD-PET images was evaluated. In a practical setting, even with a fixed scanner and imaging protocol (i.e., the acquisition schemes for MRI contrasts and the radiotracer used for PET), various factors can contribute to OOD data, e.g., variation in photon count statistics due to slight variations in the injected dose, and physiological factors like body mass index (BMI), aging brain, and pathology. In addition to the above, several other factors can contribute to OOD data, as described above.
- BMI body mass index
- OOD data arising from several acquisition scenarios: (i) variation due to reduced photon counts (reduced SNR), (ii) variation due to patient motion, (iii) variation due to pathology (Alzheimer's disease) and age, and (iv) variation due to PET and MRI data acquired using separate scanners or different imaging protocols.
- the LD-PET images were used for training all the DNNs.
- OOD PET data is generated by varying the photon counts and the associated SNR in the sinogram space, followed by OSEM reconstruction with post reconstruction Gaussian smoothing.
- two additional sets of test data are generated, at increasing degradation levels of the input LD-PET data, namely very low-dose (vLD-PET) and ultra-low-dose (uLD-PET).
- the LD-PET images of the subjects may be stored in the databases of the server.
- the OOD test set consisting of vLD-PET and uLD-PET was generated as follows: (i) the intensities of the LD-PET images were scaled down, and (ii) the LD-PET images were retrospectively forward-projected using the system matrix S.
- the present invention uses the projection model from STIR based on a ray-tracing algorithm for the system geometry, which is similar to that used in the Siemens PET-MRI system used in this disclosure.
- the PSNR value, averaged across the test set, between the reference SD-PET image and the LD-PET image was around 21 dB.
- the LD-PET images were scaled such that, after OSEM reconstruction, the PSNR value, averaged across the test set, between the reference SD-PET and the vLD-PET image was around 17 dB; the PSNR for the uLD-PET image was around 13 dB.
- the PSNR values for the set of uLD-PET images were around 0.66 times those of the LD-PET images. This variation in the PSNR values was motivated by the conventional work reported in Watson et al. (2005), titled "Optimizing injected dose in clinical PET by accurately modelling the counting-rate response functions specific to individual patient scans", which gives an example where the PET images' mean SNR reduced by a factor of around 0.66 when the patients' body weight increased from around 60 kg to around 120 kg.
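- The sketch below illustrates, under simplifying assumptions, how lower-count OOD sinograms of this kind could be simulated for a single 2D slice: intensities are scaled, forward-projected, resampled with Poisson noise, and reconstructed. Filtered back-projection is used here only as a simple stand-in for the OSEM reconstruction and STIR ray-tracing projector described above; the count scale and angle set are illustrative.

```python
import numpy as np
from skimage.transform import radon, iradon

def simulate_low_count(image, count_scale=0.3, theta=None, rng=None):
    """Scale a PET slice, forward-project it, apply Poisson count noise in the
    sinogram space, and reconstruct a degraded (vLD/uLD-like) image."""
    rng = np.random.default_rng() if rng is None else rng
    if theta is None:
        theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sino = radon(image * count_scale, theta=theta)        # scaled forward projection
    noisy_sino = rng.poisson(np.clip(sino, 0, None)).astype(np.float64)
    return iradon(noisy_sino, theta=theta, filter_name="ramp")
```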
- OOD data from different imaging protocols was used corresponding to the visual task experiments used for functional PET analysis.
- the dataset comprises T1w MRI, T2w MRI, and dynamic PET scans from six healthy subjects with mean age 24.3 years and standard deviation 3.8 years, including five females.
- the scanner and MRI structural imaging protocols are the same as the data used for training all the DNN models.
- the scanning protocol involved bolus injection of 100 MBq of the radiotracer, which is significantly different from the training data (described above).
- the reconstructed PET images using the entire list-mode data were considered as the reference PET image.
- a lower-quality PET image (input PET image) was generated by using a part of the list-mode data such that the PSNR value, averaged across the entire dataset, between the reference PET image and input image was around 24 dB.
- OOD data from motion artifacts was used.
- OOD-Motion the data corresponding to "Motion Controlled Study” was used.
- FDG PET and structural MRI (Tlw and T2w) data were acquired from a healthy volunteer.
- for FDG-PET, a bolus of 110 MBq FDG was provided, and specific instructions pertaining to head movement were provided at specific scan times.
- the reconstructed images using the (i) entire list-mode data and (ii) motion correction algorithm were considered as the reference PET image.
- the lower-quality PET image (input PET image) was generated by using part of the list-mode data. Importantly, no motion correction was performed during or post-reconstruction in the present invention.
- OOD data from ADNI: Alzheimer's dementia; cross-scanner; multi-site; aged-population data.
- a dataset was obtained from the Alzheimer's disease neuroimaging initiative (ADNI) database, which is a well-known publicly available dataset. Data for 25 subjects (mean age 77 years and standard deviation 10.1 years, including 9 females) was randomly selected and categorized as follows: (i) normal aging (2 subjects), (ii) early mild cognitive impairment (EMCI, 4 patients), (iii) mild cognitive impairment (MCI, 4 patients), (iv) late mild cognitive impairment (LMCI, 8 patients), and (v) dementia or AD (7 patients).
- ADNI Alzheimer's disease neuroimaging initiative
- Tlw, T2w, and [18-F] FDG PET images for all the subjects mentioned above were obtained.
- the structural MRI images were acquired on 1.5T and 3T scanners using 3D MPRAGE for T1w and FLAIR for T2w images with a resolution of 1 mm³ isotropic. All the PET images were obtained at a resolution of 1.01 × 1.01 × 2.02 mm³.
- the LD-PET and SD-PET data from OOD-Counts used for training the DNNs were acquired on a 3T simultaneous PET-MRI scanner with a resolution of 1 mm³ isotropic for MRI and 2.09 × 2.09 × 2.03 mm³ for PET.
- the degraded input images are retrospectively generated such that the PSNR value, averaged across the OOD-ADNI test set, between the reference PET image and the degraded input PET image was around 21 dB.
- FIG 3 illustrates the predicted images from different methods across three different variations of the LD-PET data for a representative subject.
- the input PET images LD-PET, vLD-PET, and uLD-PET are shown in Figures 3 (a2), (b2), and (d2), respectively, with Figures 3 (a1), (b1), and (d1) showing the corresponding sinograms.
- the DIP-based M1 (Figure 3 (a3), (b3), (d3)) denoises the input LD-PET image.
- M1 performs poorly in predicting the FDG uptake in the reference SD-PET image.
- the ResNet-based M2 (Figure 3 (a4), (b4), (d4)) is designed to predict the activity in the reference SD-PET image.
- M2 is unable to produce images that are smoothly varying, with textural features.
- the method M2 relies on a standard ResNet architecture, employing short-range skip connections compared to longer-range, hierarchically designed skip connections in the U-net architecture.
- Methods M3 (Figure 3 (a5), (b5), (d5)) and M4 (Figure 3 (a6), (b6), (d6)), relying on predicting the residual images as output, produce realistic SD-PET images with LD-PET as input.
- both the methods show some residual noise in the images, despite reasonably recovering the contrast and texture similar to the SD-PET image.
- the method M4 improves over the loss in contrast shown by M3, emphasizing the contribution of the multi-modal MRI input.
- the method M5 shows superior performance with LD-PET (Figure 3 (a7)), shows little degradation (in terms of contrast and certain structures like the sulci and gyri) with vLD-PET (Figure 3 (b7)), and does not predict the desired texture and contrast at uLD-PET (Figure 3 (d7)).
- the present suDNN method shows superior prediction across varying levels of input ( Figure 3 (a8),(b8), and (d8)).
- the ground-truth SD-PET along with the corresponding sinogram are shown in the topmost row in figure 3.
- the method of the present invention shows more realistic texture and contrast, and reduced magnitude in residual images (Figure 3 (c6) and (e6)), in comparison to other baselines.
- the sinograms of the predicted images are provided in Figures 3 (a9), (b9), and (d9).
- the sinograms demonstrate subtle differences in appearance in comparison to each other; the differing appearance is in accordance with the image quality of the predicted images obtained with different low-dose inputs.
- the residual images between the sinograms of the predicted images and that of the reference SD-PET image, corresponding to the inputs vLD-PET and uLD-PET, are shown in Figure 3 (c7) and (e7).
- FIGS. 4(a)-4(b) show quantitative plots with peak signal to noise ratio (PSNR) and structural similarity index (SSIM) values averaged over 100 slices of every subject (brain) from the test set in 3-fold cross validation (18 patients for training, 4 for validation, 6 for testing) for different PET inputs: LD-PET, vLD-PET, and uLD-PET.
- the performance of suDNN is comparable to M3-M5; nevertheless, as the input degrades, the present invention significantly outperforms all other methods (M1-M5), demonstrating substantially higher robustness/insensitivity to OOD data.
- a paired t-test was conducted on the SSIM and PSNR values for all methods for the three low-dose inputs. The improvement using the present suDNN method was found to be statistically significant (p-value less than 0.001) in comparison to all other methods (M1-M5) for all inputs LD-PET, vLD-PET, and uLD-PET.
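- A minimal sketch of such a paired test using SciPy is shown below; the variable names are illustrative, and the per-subject score vectors are assumed to be ordered identically for both methods.

```python
from scipy.stats import ttest_rel

def compare_methods(scores_sudnn, scores_baseline):
    """Paired t-test on per-subject SSIM (or PSNR) values for two methods."""
    t_stat, p_value = ttest_rel(scores_sudnn, scores_baseline)
    return t_stat, p_value

# Example: the improvement is reported as significant when p_value < 0.001.
```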
- FIG 5 illustrates visual inspection of the zoomed ROIs of the input, reference, and predicted images for the case of uLD-PET.
- the zoomed region of interest (ROI) includes the caudate, putamen, and thalamus.
- the caudate nucleus shows hyperintensity in the SD-PET image (highlighted using the white arrow in Figure 5 (a4)) that is not visible in the uLD-PET image (Figure 5 (a3), (b3)).
- the unimodal DNN M3 (Figure 5 (c1), (d1)) severely underestimates the uptake in the caudate and the thalamus regions.
- the present suDNN (Figure 5 (c4), (d4)) provides the best estimate among the predicted images.
- the other multimodal DNN methods M4 and M5 (Figure 5 (c2), (d2) and (c3), (d3)) do show some recovery of the hyperintensity in the caudate and thalamus regions compared to M3.
- the above demonstrates the importance of including the MRI structural images (Figure 5 (a1), (b1) and (a2), (b2)) that distinctly show the subcortical nuclei in the cerebrum.
- FIG. 6 illustrates qualitative evaluation of the methods for three additional types of OOD data: OOD-Protocol (rows a and b), OOD-Motion (rows c and d), and OOD-ADNI (rows e and f).
- Figure 6 shows the predicted PET images and residuals for the models M3, M4, M5, and suDNN on three additional OOD datasets OOD-Protocol, OOD-Motion, and OOD-ADNI.
- M3 Figure 6 (a2)
- M4 (Figure 6 (a3)) shows increased activity across the entire brain.
- suDNN ( Figure 6 (a5)) closely matches the activity distribution across brain regions without severe under- or over- estimation, yielding the least residual magnitudes (Figure 6 (b4)).
- both M3 and M4 ( Figure 6 (c2) and (c3)) show increased activation across the entire brain region and are also unable to recover certain anatomical structures (e.g., caudate nuclei).
- suDNN (Figure 6 (c5)) is able to closely match the activity distribution across brain regions.
- suDNN (Figure 6 (e5)) provides substantially improved images than other methods ( Figure 6 (e2)-(e4)) with the least residual magnitudes.
- FIG. 7 illustrates quantitative evaluation of the methods for three additional OOD datasets: OOD-Protocol (6 subjects), OOD-Motion (1 subject), and OOD-ADNI (25 subjects).
- Figures 7(a)-(b) show quantitative plots with PSNR and SSIM values for 100 slices of every subject for each of the three additional OOD datasets: OOD-Protocol, OOD-Motion, and OOD-ADNI.
- the dotted lines in both plots indicate the median PSNR and SSIM values of suDNN evaluated on the LD-PET dataset (part of OOD-Counts) from Figure 4.
- the present suDNN performs significantly better (by around 4 dB for OOD-Protocol, and around 1.5 dB for OOD-Motion and OOD-ADNI) than M3, M4, and M5.
- the performance of the disclosed suDNN is comparable to its corresponding performance on the LD-PET test data (in OOD-Counts).
- a similar trend can be observed in the SSIM plot (Figure 7(b)). While the present invention shows SSIM values on OOD-Protocol comparable to those on the LD-PET data in OOD-Counts, it shows a slight degradation of around 0.1 and 0.2 for OOD-Motion and OOD-ADNI, respectively.
- for a 2.5D unimodal U-net (A1), a basic DNN having a U-net architecture similar to Xu et al., titled "200x low-dose PET reconstruction using deep learning", with a unimodal input but a modified output is used, such that the DNN predicts the PET image instead of the residual between the input LD-PET and the reference SD-PET image (as in M3).
- A1 is trained using the 2.5D scheme, penalizing the mean-squared error in the image space between the predicted and the reference images.
- the DNN A1 is modified by replacing the unimodal input with a multimodal input including multi-contrast MRI images, with a fused layer (as in suDNN), retaining the same loss function as A1.
- the DNN includes a learned manifold-based loss similar to the perceptual loss or manifold-based loss reported in some recent methods.
- the total loss combines the image-space loss with the learned-manifold loss L_E, where the free parameter λ_E controls the weight of the loss term L_E.
- the learned-manifold based loss relies on learning an autoencoder trained using the set of SD-PET images.
- the loss function penalizes the differences between the encodings obtained by applying the encoder (from learned autoencoder) to the predicted PET and reference SD-PET images.
- the loss function is $\mathcal{L}_E = \lVert E(Y) - E(U^{SD}) \rVert_F$, where $\lVert\cdot\rVert_F$ represents the Frobenius tensor norm and E(·) denotes the learned encoder.
- for this ablated DNN, a sinogram-space loss, weighted by λ_S, is used instead of the learned-manifold loss (in A3).
- the free parameters (λ_E, λ_S) were automatically tuned to (2e-3, 3e-3) using the validation set. For example, in the present invention, the value of λ_E is taken as 0.002 and the value of λ_S is taken as 0.003.
- FIG 8 illustrates quantitative evaluation of the DNNs in the ablation analysis for the input PET images of LD-PET, vLD-PET, and uLD-PET.
- two other intermediate levels of degradation of LD-PET were included; the image quality of those levels lies between that of (i) LD-PET and vLD-PET, and (ii) vLD-PET and uLD-PET.
- DNNs with a multimodal input improve substantially over DNNs with a unimodal input (suDNN, suDNN-Ablated4, suDNN-Ablated3, and suDNN-Ablated2 perform better than suDNN-Ablated1).
- inclusion of the learned-manifold-based loss, in addition to the image-space loss, for suDNN-Ablated3 provides improved robustness over suDNN-Ablated2 and suDNN-Ablated1.
- suDNN-Ablated4, which includes a physics-based loss instead of the learned-manifold-based loss in suDNN-Ablated3, shows significant improvement over suDNN-Ablated3 with vLD-PET and uLD-PET.
- the disclosed suDNN, which models uncertainty in both the image and sinogram spaces, provides performance comparable to suDNN-Ablated4, but significantly better than suDNN-Ablated1, suDNN-Ablated2, and suDNN-Ablated3 at higher levels of degradation of the input.
- the predicted variance image C from the present suDNN can potentially be useful for quantifying the uncertainty in the predicted images.
- FIG 9 illustrates visual comparison of the output SD-PET images from the ablated suDNN versions suDNN-Ablated3 and suDNN-Ablated4, and the proposed suDNN, for the input PET images (i) LD-PET, (ii) vLD-PET, and (iii) uLD-PET.
- Results of DNNs in the ablation study with input PET images: LD- PET, vLD-PET, uLD-PET are shown in figure 9.
- Figures 9(a1)-(a3) show the variations in the input PET: LD-PET, vLD-PET, and uLD-PET, respectively.
- Figures 9(b1)-(b3) are the predicted images, using varying levels of PET input, from suDNN,
- Figures 9(c1)-(c3) are from suDNN-Ablated4, and
- Figures 9(d1)-(d3) are from suDNN-Ablated3, respectively.
- the predicted PET images from suDNN-Ablated3 are close to those of suDNN-Ablated4 and suDNN (Figure 9(c1),(b1) and Figure 9(c2),(b2)).
- suDNN-Ablated3 shows substantial degradation with uLD-PET as input ( Figure 9(d3)).
- Two global thresholds are defined, one for the predicted uncertainty image and another for the computed residual image, to identify pixels with high residuals and high uncertainty, respectively. While pixel locations with absolute residual values r > σ_R indicate sub-optimal reconstruction, pixel locations with uncertainty values d > σ_U indicate predictions with high uncertainty. Subsequently, two binary masks BM1 and BM2 were obtained by applying the threshold values σ_R and σ_U on r and d, respectively. Through empirical analysis, the values of the global thresholds (σ_R, σ_U) are fixed to (0.3, 0.03), respectively.
- the high-intensity values in the map Q1 correspond with the high-intensity values in the map Q2. This implies that regions with high residual values correspond to regions with high uncertainty in the predicted images. In this way, the map Q2 (available at inference) might act as a proxy for the prediction error while inferring a PET reconstruction from test data. It is observed that the regions with large values in Q1 are subsumed within regions with large values in Q2.
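- The following is a hedged sketch of the thresholding described above; interpreting Q1 and Q2 as the masked residual map and masked uncertainty map is an assumption made here for illustration, while the threshold values are those reported above.

```python
import numpy as np

def uncertainty_residual_masks(residual, uncertainty, sigma_r=0.3, sigma_u=0.03):
    """Binary masks for high-residual and high-uncertainty voxels, plus the
    corresponding masked maps (interpreted here as Q1 and Q2)."""
    bm1 = np.abs(residual) > sigma_r   # sub-optimal reconstruction (r > sigma_R)
    bm2 = uncertainty > sigma_u        # high predictive uncertainty (d > sigma_U)
    q1 = np.abs(residual) * bm1        # masked residual map (needs the reference image)
    q2 = uncertainty * bm2             # masked uncertainty map (available at inference)
    return bm1, bm2, q1, q2
```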
- the present invention discloses a novel uncertainty-aware DNN framework for the prediction of SD-PET images from uLD-PET images in simultaneous PET-MRI systems.
- the present invention shows the robustness to practical (OOD) degradations in the data.
- the inclusion of the physics-based loss function provides robustness to OOD data.
- the present invention predicts SD-PET images with better accuracy and also quantifies the uncertainty in the predictions that can aid decision-makers in the clinics.
- the use of multi-contrast MRI images as multi-channel input provides a substantial improvement over unimodal PET-only inputs.
- the present invention does not require a very close match between training and testing data and provides more information regarding the quality of prediction made by the DNN. This allows inclusion of a wide variety of datasets for training without the necessity for re-training the DNN. Additionally, compared to the state of the art, the present invention is robust to OOD data arising from other factors such as differences in imaging protocol on another cohort, motion artifacts, age, pathology, and inter-scanner variability.
- the present invention produces enhanced image quality of low-dose dynamic PET reconstructions and enables collection of PET data for brain research with very low doses of radiotracers. Further, the present invention enables (i) faster imaging acquisitions using conventional doses of radiotracers, permitting more diagnostic scanning procedures to be completed and higher revenue from the scanning service, and (ii) reduced doses of radiotracers, which requires smaller quantities of the radioactive source to be produced and lowers the production costs of the radiotracers.
- the present methodology has shown improved robustness to newer PET OOD acquisitions and also provided the underlying uncertainty involved in the predictions, facilitating a paradigm of risk assessment in the application of DNNs to low-dose PET image reconstruction.
- the method has the potential to dramatically improve the utility of ultra-low-dose PET imaging in diagnostic imaging, therapeutic monitoring, and in drug development research in oncology, neurology, and cardiology.
- Physics inspired deep neural net based reconstruction of ultra-low-dose PET scans has the potential to substantially expand the use of PET in longitudinal studies and imaging of radiation-sensitive populations including children and pregnant women.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Nuclear Medicine (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
The present invention relates to a deep-learning-based framework for automatically synthesizing standard-dose PET (SD-PET) images from a multimodal input comprising low-dose PET images and multimodal anatomical images, with improved robustness to variations in the input data. In addition to providing improved accuracy over the state of the art, the present invention provides uncertainty quantification of the reconstructed images. The present invention thus enables low-dose imaging in PET scanners with a reduced dosage requirement, consequently reducing exposure to ionizing radiation, and also enables faster imaging, which in turn improves the throughput of PET scanners. The present invention particularly discloses a sinogram- and uncertainty-based deep neural network (suDNN) framework. The main advantages of the present invention are (i) the significant improvement in accuracy of reconstructed images in the presence of out-of-distribution (OOD) data, and (ii) the quantification of uncertainty in the prediction of SD-PET images.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202021036972 | 2020-08-27 | ||
| IN202021036972 | 2020-08-27 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022043910A1 true WO2022043910A1 (fr) | 2022-03-03 |
Family
ID=78085722
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2021/057826 Ceased WO2022043910A1 (fr) | 2020-08-27 | 2021-08-26 | Systèmes et procédés pour accentuer automatiquement des images de pet à faible dose avec robustesse vis-à-vis des données hors distribution (ood) |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2022043910A1 (fr) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114897752A (zh) * | 2022-05-09 | 2022-08-12 | 四川大学 | 一种基于深度学习的单透镜大景深计算成像系统及方法 |
| CN116091706A (zh) * | 2023-04-07 | 2023-05-09 | 山东建筑大学 | 多模态遥感影像深度学习匹配的三维重建方法 |
| CN116152502A (zh) * | 2023-04-17 | 2023-05-23 | 华南师范大学 | 一种基于解码层损失回召的医学影像分割方法及系统 |
| WO2025081362A1 (fr) * | 2023-10-18 | 2025-04-24 | 中国科学院深圳先进技术研究院 | Procédé et système d'amélioration d'image tep à faible dose |
-
2021
- 2021-08-26 WO PCT/IB2021/057826 patent/WO2022043910A1/fr not_active Ceased
Non-Patent Citations (9)
| Title |
|---|
| ALEX KENDALL ET AL: "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", PROCEEDINGS OF THE 31ST CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NIPS 2017), 5 December 2017 (2017-12-05), XP055651632 * |
| CHEN KEVIN T ET AL: "Ultra-Low-Dose 18F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs.", RADIOLOGY 03 2019, vol. 290, no. 3, March 2019 (2019-03-01), pages 649 - 656, XP055871477, ISSN: 1527-1315 * |
| CUI ET AL., PET IMAGE DENOISING USING UNSUPERVISED DEEP LEARNING |
| GONG ET AL., PET IMAGE DENOISING USING A DEEP NEURAL NETWORK THROUGH FINE TUNING |
| POZARUK ANDRII ET AL: "Augmented deep learning model for improved quantitative accuracy of MR-based PET attenuation correction in PSMA PET-MRI prostate imaging", EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING, vol. 48, no. 1, 11 May 2020 (2020-05-11), pages 9 - 20, XP037345804, ISSN: 1619-7070, DOI: 10.1007/S00259-020-04816-9 * |
| SUDARSHAN VISWANATH P ET AL: "Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data", MEDICAL IMAGE ANALYSIS, OXFORD UNIVERSITY PRESS, OXOFRD, GB, vol. 73, 27 July 2021 (2021-07-27), XP086781461, ISSN: 1361-8415, [retrieved on 20210727], DOI: 10.1016/J.MEDIA.2021.102187 * |
| WATSON ET AL., OPTIMIZING INJECTED DOSE IN CLINICAL PET BY ACCURATELY MODELLING THE COUNTING-RATE RESPONSE FUNCTIONS SPECIFIC TO INDIVIDUAL PATIENT SCANS, 2005 |
| XIANG ET AL., 3D AUTO-CONTEXT-BASED LOCALITY ADAPTIVE MULTI-MODALITY GANS FOR PET SYNTHESIS |
| XU ET AL., 200X LOW-DOSE PET RECONSTRUCTION USING DEEP LEARNING |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114897752A (zh) * | 2022-05-09 | 2022-08-12 | 四川大学 | 一种基于深度学习的单透镜大景深计算成像系统及方法 |
| CN114897752B (zh) * | 2022-05-09 | 2023-04-25 | 四川大学 | 一种基于深度学习的单透镜大景深计算成像系统及方法 |
| CN116091706A (zh) * | 2023-04-07 | 2023-05-09 | 山东建筑大学 | 多模态遥感影像深度学习匹配的三维重建方法 |
| CN116152502A (zh) * | 2023-04-17 | 2023-05-23 | 华南师范大学 | 一种基于解码层损失回召的医学影像分割方法及系统 |
| CN116152502B (zh) * | 2023-04-17 | 2023-09-01 | 华南师范大学 | 一种基于解码层损失回召的医学影像分割方法及系统 |
| WO2025081362A1 (fr) * | 2023-10-18 | 2025-04-24 | 中国科学院深圳先进技术研究院 | Procédé et système d'amélioration d'image tep à faible dose |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2022043910A1 (fr) | Systèmes et procédés pour accentuer automatiquement des images de pet à faible dose avec robustesse vis-à-vis des données hors distribution (ood) | |
| Zaidi et al. | Comparative assessment of statistical brain MR image segmentation algorithms and their impact on partial volume correction in PET | |
| Somayajula et al. | PET image reconstruction using information theoretic anatomical priors | |
| US20200210767A1 (en) | Method and systems for analyzing medical image data using machine learning | |
| US8675936B2 (en) | Multimodal image reconstruction | |
| Zhang et al. | Spatial adaptive and transformer fusion network (STFNet) for low‐count PET blind denoising with MRI | |
| Sanaat et al. | DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms | |
| US12346998B2 (en) | Artificial intelligence (AI)-based standardized uptake value (SUV) correction and variation assessment for positron emission tomography (PET) | |
| KR20190101905A (ko) | 양전자방출 단층촬영 시스템 및 그것을 이용한 영상 재구성 방법 | |
| WO2022120731A1 (fr) | Procédé et système de conversion de modalité d'image tep-irm sur la base d'un réseau antagoniste génératif cyclique | |
| WO2021062413A1 (fr) | Système et procédé d'apprentissage profond pour problèmes inverses sans données d'apprentissage | |
| Wang et al. | An MR image‐guided, voxel‐based partial volume correction method for PET images | |
| Funck et al. | Surface-based partial-volume correction for high-resolution PET | |
| WO2024226421A1 (fr) | Systèmes et procédés de débruitage d'images médicales à l'aide d'un apprentissage profond | |
| Gray | Machine learning for image-based classification of Alzheimer's disease | |
| Onishi et al. | Self-supervised pre-training for deep image prior-based robust pet image denoising | |
| WO2023219963A1 (fr) | Amélioration basée sur l'apprentissage profond d'imagerie par résonance magnétique multispectrale | |
| US11151759B2 (en) | Deep learning-based data rescue in emission tomography medical imaging | |
| Lei et al. | Estimating standard-dose PET from low-dose PET with deep learning | |
| CN113469915A (zh) | 一种基于去噪打分匹配网络的pet重建方法 | |
| US11672492B2 (en) | Feature space based MR guided PET reconstruction | |
| Chen et al. | Temporal processing of dynamic positron emission tomography via principal component analysis in the sinogram domain | |
| Pauwels et al. | Deep Learning in Image Processing: Part 2—Image Enhancement, Reconstruction and Registration | |
| Serrano‐Sosa et al. | Multitask Learning Based Three‐Dimensional Striatal Segmentation of MRI: fMRI and PET Objective Assessments | |
| CN112508813A (zh) | 一种基于改进Kernel方法结合稀疏约束的PET图像重建方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21787495 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 21787495 Country of ref document: EP Kind code of ref document: A1 |