
WO2025128284A1 - System and method for three-dimensional computed tomography reconstruction from X-rays - Google Patents

System and method for three-dimensional computed tomography reconstruction from X-rays

Info

Publication number
WO2025128284A1
WO2025128284A1 (application PCT/US2024/056343)
Authority
WO
WIPO (PCT)
Prior art keywords
volume
organ
image
generating
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/056343
Other languages
English (en)
Inventor
Arie Kaufman
Gaofeng DENG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Research Foundation of the State University of New York
Original Assignee
Research Foundation of the State University of New York
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Foundation of the State University of New York filed Critical Research Foundation of the State University of New York
Publication of WO2025128284A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02: Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03: Computed tomography [CT]
    • A61B 6/032: Transmission computed tomography [CT]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5205: Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation

Definitions

  • the present invention relates to 3D visualization using 2D image data and more particularly relates to systems and methods for producing a 3D visualization using a limited number of 2D X-ray images.
  • Computed tomography (CT) has emerged as a pivotal imaging modality with a vast array of applications in fields ranging from medical diagnostics to industrial non-destructive testing.
  • CT scanners work by utilizing a rotating X-ray tube along with a series of detectors within a gantry which capture the variations in X-ray attenuation by different tissues inside the body.
  • These diverse X-ray measurements obtained from multiple perspectives, are then processed using tomographic reconstruction algorithms, such as filtered back-projection and iterative reconstruction, to produce 3D cross-sectional tomographic images.
  • This ability to image internal structures at millimeter resolution has revolutionized the way physicians perceive, understand, and diagnose subjects, in a non-invasive manner.
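As background for the attenuation measurements described above, the sketch below illustrates the Beer-Lambert relation on a toy 2D attenuation map: a detector reading is the unattenuated intensity scaled by the exponential of the negative line integral of attenuation along the ray, and taking the negative log recovers the line integrals that tomographic reconstruction algorithms invert. The map, beam geometry, and values are all assumed for illustration.

```python
import numpy as np

# Toy 2D attenuation map (one CT slice) with a denser square "tissue" region.
mu = np.zeros((64, 64))
mu[24:40, 24:40] = 0.05

I0 = 1.0  # unattenuated beam intensity

# Parallel-beam projections along two orthogonal directions (two "perspectives"):
line_integrals_0 = mu.sum(axis=0)   # rays travelling down the columns
line_integrals_90 = mu.sum(axis=1)  # rays travelling across the rows

# Beer-Lambert: detector intensity decays exponentially with the line integral.
I_detector = I0 * np.exp(-line_integrals_0)

# Taking -log recovers the line integrals (the sinogram values) that
# filtered back-projection or iterative reconstruction would invert.
recovered = -np.log(I_detector / I0)
assert np.allclose(recovered, line_integrals_0)
```

The same relation holds per ray for every view angle; a real scanner collects many such views around the gantry before reconstruction.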
  • Neural Radiance Fields (NeRF), further described by Mildenhall et al. in the article "NeRF: Representing scenes as neural radiance fields for view synthesis," Communications of the ACM, 65(1):99-106, 2021, has improved scene representation and the field of photorealistic novel view synthesis from a sparse set of 2D images.
  • the improvements from NeRF have inspired new avenues of exploration, particularly in the context of novel view synthesis from limited images.
  • an image-conditioned generalizable NeRF model, referred to as pixelNeRF, has been proposed by Yu et al.
  • Diffusion models have recently been used in a range of applications, including producing high-quality images, capturing complex data distributions, and mitigating uncertainty. These applications leverage several desirable properties of diffusion models, such as being relatively straightforward to define and efficient to train, distribution coverage, and a stationary training objective.
  • Diffusion models have found several applications in generative image synthesis and have demonstrated superior performance over GANs (Generative Adversarial Networks) in unconditional generation. Diffusion models have also been shown to be excellent at modeling conditional distributions of images. For example, Saharia et al. have proposed Palette, a unified framework for image-to-image translation based on conditional diffusion models, which has exhibited exceptional performance in several tasks, such as image inpainting and colorization. See "Palette: Image-to-image diffusion models," ACM SIGGRAPH 2022 Conference Proceedings, pp. 1-10, 2022. Diffusion models are also used for image superresolution from low-resolution images.
  • Diffusion models have also demonstrated efficacy in 3D generation tasks.
  • it would be desirable to provide 3D visualization without the high dose of radiation associated with CT imaging.
  • such a 3D visualization would provide anatomical shape, position, and spatial relations from a limited number of 2D X-rays.
  • Embodiments described herein provide a system and method for three-dimensional computed tomography reconstruction from X-rays.
  • the present systems and methods can reduce the time for scanning and the amount of radiation dose in CT scanning, which is beneficial in both clinical and industrial settings.
  • embodiments of the present systems and methods integrate implicit neural representation and diffusion models to address the long-standing problem of CT reconstruction from few X-rays.
  • the systems and methods provide conditional diffusion models to address blurriness caused by the ambiguity and uncertainty arising from the use of few input images.
  • the system and method then provide a method for CT volume representation, including a neural voxel feature field, and applies a diffusion model conditioned on the feature field to sample the CT volume.
  • a method of generating a 3D volumetric image from 2D X-ray images includes using data representing at least one X-ray image of a region of interest as input data.
  • the method includes extracting volume features from the at least one X-ray image and concatenating the extracted volume features with a noise source to generate a noisy target volume.
  • a process of iteratively denoising the noisy target volume by sampling the volume with a trained conditioned diffusion model is applied and the method concludes with reprojecting a 3D volume of the region of interest.
  • the 2D X-ray images include pose data and the step of reprojecting further comprises reprojecting the 3D volume with the pose data.
  • the method may further include a process of fine-tuning the reprojected 3D volume.
  • in some embodiments, the region of interest is an organ and the method further comprises segmenting the organ volume from the 3D volume.
  • the method may include estimating properties of an organ, such as 3D shape and organ volume.
  • in some embodiments, the organ is a lung and the method further includes steps to provide an estimate of total lung volume.
  • total lung volume is estimated based on the number of voxels within the segmented organ volume.
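The voxel-counting estimate in the preceding bullet can be sketched as follows. The mask, voxel spacing, and resulting volume are hypothetical stand-ins: in practice the mask would come from the lung segmentation step and the voxel spacing from the scan metadata.

```python
import numpy as np

# Stand-in for a segmented lung mask in a reconstructed CT volume.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:50, 10:40, 10:40] = True  # toy "lung" region

# Assumed voxel spacing in mm (slice thickness, row spacing, column spacing).
spacing_mm = (1.5, 1.0, 1.0)
voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> mL

# Total lung volume = number of segmented voxels x volume of one voxel.
total_lung_volume_ml = mask.sum() * voxel_volume_ml
print(round(total_lung_volume_ml, 1))  # → 54.0
```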
  • the present embodiments may provide an important supplement to current X-ray and CT imaging techniques and, in some cases, provide improvements on traditional CT reconstruction techniques from X-ray images using fewer X-rays and/or faster scanning. Additionally, the system and method can be used as a 3D visualization enhancement of X-ray images, which are widely used in medical planning, 3D organ shape and volume analysis, diagnosis, non-medical security checks, and industrial non-destructive inspections.
  • Systems, methods, and non-transitory computer-readable media are provided. In one embodiment, a method of generating a 3D volumetric image from 2D X-ray images includes providing data representing at least one X-ray image of a region of interest as input data.
  • Volume features are extracted from the at least one X-ray image and the extracted volume features are concatenated with a noise source to generate a noisy target volume.
  • the method includes iteratively denoising the noisy target volume by sampling the volume with a trained conditioned diffusion model. The method then reprojects the 3D volume of the region of interest.
  • the 2D X-ray images include pose data and the reprojecting operation further comprises reprojecting the 3D volume with the pose data.
  • the method may further include fine-tuning the reprojected 3D volume.
  • in some embodiments, the region of interest is an organ and the method further comprises segmenting the organ volume from the 3D volume.
  • the method further comprises estimating organ shape from the segmented organ volume.
  • total lung volume can be estimated.
  • lung volume may be estimated based on the number of voxels within the segmented organ volume.
  • FIG. 1 is a block diagram showing an exemplary system for generating 3D images from 2D X-ray images employing a conditional diffusion model;
  • FIG. 2 is a flow diagram illustrating exemplary steps in a training phase of the present embodiments of systems and methods for generating 3D images from 2D X-ray images;
  • FIG. 3 is a flow chart illustrating an exemplary process for performing an inference phase of the present embodiments of systems and methods for generating 3D images from 2D X-ray images employing a conditional diffusion model;
  • FIGS. 4A-4D are graphs depicting changes in PSNR between input X-rays and reprojections, as well as the changes in PSNR, SSIM, and LPIPS between a refined volume and ground truth during iterative refinement in accordance with the present methods;
  • FIGS. 5A-5F are a series of images illustrating LIDC ground truth images and reconstruction; and
  • FIGS. 6A-6F are a series of images illustrating an exemplary reconstruction employing the present methods.
  • the embodiments described herein provide examples of systems and methods for three-dimensional computed tomography reconstruction from a limited set of conventional X-ray images.
  • the presently disclosed systems and methods provide for 3D visualization from 2D X-ray images which may be useful when conventional CT scanning is not available and can reduce the time for scanning and the amount of radiation dose compared to CT scanning, which can be beneficial in certain clinical and industrial settings.
  • the present embodiments preferably apply a conditional diffusion model to reduce the ambiguity and uncertainty.
  • the present diffusion model is 3D aware and can sample a possible 3D volume from the distribution conditioned on the input X-rays.
  • This method improves the fidelity of the reconstructed volume and reduces the blurriness typically exhibited in non-generative regression-based methods.
  • the present embodiments offer 3D volumetric information to X-ray images while also preserving their original 2D information. To achieve this, an iterative refinement method to enforce the consistency between the inputs and reprojections can be employed.
  • Diffusion models, or diffusion probabilistic models, which are preferably used in the present embodiments, are a class of latent variable generative models used in machine learning systems.
  • a diffusion model typically consists of three major components: the forward process, the reverse process, and the sampling procedure.
  • the goal of a diffusion model is to learn a diffusion process that generates a probability distribution for a given dataset from which new images can be sampled.
  • Diffusion models learn the latent structure of a dataset by modeling the way in which data points diffuse through their latent space.
  • Diffusion models are typically formulated as Markov chains and trained using variational inference.
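As a concrete illustration of the forward process these bullets describe, the sketch below noises a toy 3D volume using the closed-form DDPM expression x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. The linear noise schedule and all sizes are common assumed choices, not taken from this document.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule (a common choice)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)      # cumulative products: alpha_bar_t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form, as in DDPMs."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.standard_normal((8, 8, 8))  # toy 3D "CT volume"
x_early = q_sample(x0, 10)           # mildly noised
x_late = q_sample(x0, T - 1)         # nearly pure Gaussian noise:
                                     # the signal coefficient sqrt(alpha_bar_T)
                                     # is close to zero at the final step
```

Training a DDPM then amounts to teaching a network to undo this corruption at every noise level, which is what makes the Markov-chain formulation tractable.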
  • Diffusion models can be conditioned to generate an output based on an imposed condition rather than the whole distribution of input data.
  • a diffusion model trained on a broad corpus of images would typically generate images that look like a random image from that corpus.
  • a condition may be imposed, such as defining a category.
  • Conditioning typically requires converting the conditioning parameters into a vector of floating-point numbers which is applied to the underlying diffusion model neural network.
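A minimal sketch of the conversion just described, assuming a categorical condition (e.g., an image class) and a learned embedding table; the table, sizes, and the concatenation scheme are all hypothetical illustrations of one common way to inject a condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding table mapping a category id to a float vector.
num_classes, embed_dim = 10, 16
embedding_table = rng.standard_normal((num_classes, embed_dim))

def condition_vector(class_id):
    """Convert a categorical condition into a vector of floats."""
    return embedding_table[class_id]

cond = condition_vector(3)                 # condition: "category 3"
noisy_sample = rng.standard_normal((32,))  # stand-in for the noisy input

# One common injection scheme: concatenate the condition with the model input.
model_input = np.concatenate([noisy_sample, cond])
assert model_input.shape == (48,)
```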
  • the present systems and methods include a training phase and an inference phase.
  • a set of data comprising input X-ray image and CT scan pairs is used to extract the volume features from the input X-rays.
  • the volume features are concatenated with noisy CT scans which are then denoised with a conditional diffusion model.
  • a CT volume is sampled with a diffusion model conditioned on the volume features extracted from the input X-rays.
  • an iterative refinement method can be used to minimize the distance between the input X-rays and resulting 2D reprojections.
  • Diffusion models sample from a distribution by reversing a forward diffusion process that gradually adds noise to the data.
  • the sampling process typically starts with Gaussian noise and produces gradually less noisy samples until reaching a final sample.
  • Conditional diffusion models make the denoising process conditional on the input signal.
  • this problem can be preferably addressed by adapting denoising diffusion probabilistic models (DDPMs) for CT image generation conditional on input X-rays.
  • Fig. 1 is a simplified block diagram illustrating an exemplary model architecture and Fig. 2 is a simplified flow diagram illustrating exemplary steps in a training phase of the present method.
  • X-Ray images 105a, 105b are inputs to the system and volume features 110 are extracted from these images.
  • noise is added according to the noise levels γ 125 to generate the noisy volumes ỹ 115.
  • the noisy volume is then concatenated with volume features (Fig. 2, step 270) to generate a concatenated 3D volume.
  • a 3D UNet U 120 can then be applied to the concatenated 3D volume to generate a denoised volume 130 given the concatenated volume as well as the noise level 125.
  • a U-Net is a known convolutional neural network that was developed for biomedical image segmentation. The network is based on a fully convolutional neural network whose architecture was adapted to work with fewer training images and to yield more precise segmentation. The U-Net architecture, which now underlies many modern image generation models, has also been employed in diffusion models for iterative image denoising.
  • X-ray images are provided as input (step 200).
  • two orthogonal X-ray images are used but it will be appreciated that more than two images can be used and in some cases only a single image can be used. It will be appreciated, however, that using a lower number of images as input inherently increases the potential ambiguity in the reconstruction.
  • the poses of the respective X-rays are known and are also input to the system, such as via metadata associated with the X-ray image data.
  • an image encoder E is used to extract image features (step 210).
  • local image features can be backprojected to the corresponding voxels to obtain volume features F, where each voxel is associated with a feature vector (step 220).
  • the method projects each voxel onto the image-plane coordinates and bilinearly interpolates the image feature map to obtain the feature vector.
  • the bilinearly interpolated local image feature can then be used as the voxel feature.
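The backprojection described in the preceding bullets can be sketched as below. The orthographic `project` function that simply drops the depth axis is an assumed stand-in for the real pose-based projection, and all shapes are toy values.

```python
import numpy as np

def bilinear_sample(feat, u, v):
    """Bilinearly interpolate a (C, H, W) feature map at continuous (u, v)."""
    C, H, W = feat.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, H - 1), min(v0 + 1, W - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat[:, u0, v0]
            + (1 - du) * dv * feat[:, u0, v1]
            + du * (1 - dv) * feat[:, u1, v0]
            + du * dv * feat[:, u1, v1])

def backproject(feat, volume_shape, project):
    """Give each voxel the interpolated image feature at its projected
    image-plane coordinate, yielding a per-voxel feature vector."""
    C = feat.shape[0]
    D, H, W = volume_shape
    vol_feat = np.zeros((C, D, H, W))
    for z in range(D):
        for y in range(H):
            for x in range(W):
                u, v = project(z, y, x)
                vol_feat[:, z, y, x] = bilinear_sample(feat, u, v)
    return vol_feat

# Toy example: 4-channel features extracted from an 8x8 "X-ray", with an
# orthographic projection that drops the depth axis.
feat = np.random.default_rng(0).standard_normal((4, 8, 8))
vol = backproject(feat, (8, 8, 8), lambda z, y, x: (float(y), float(x)))
assert vol.shape == (4, 8, 8, 8)
```

With multiple input views, the per-view voxel features would then be aggregated (e.g., by average pooling) before the MLP, as the surrounding text describes.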
  • the present method can also be applied with a single X-ray as input. Specifically, after bilinearly interpolating the image feature to obtain a feature vector for each voxel from a single X-ray, the MLP is directly applied to all the voxel features without average pooling aggregation.
  • Training of the models used in the present embodiments is further described in connection with the block diagram of Figure 1 and flow diagram of Figure 2.
  • training is performed using a large dataset comprising X-ray image 200 and CT scan 230 pair data.
  • input X-ray images are applied to an image encoder which extracts image features 210 and generates volume features 220, as discussed above.
  • the process randomly samples an X-ray 200 and CT scan 230 pair (x, y) from the training set and adds noise at random noise levels γ 240 to the CT scan y, following denoising diffusion probabilistic models (DDPMs).
  • the noisy volume ỹ 260 is concatenated 270 with the volume features, and the result is applied to a 3D UNet 280.
  • the UNet U 280 is used to denoise the noisy volume ỹ, given the volume features 220 F and noise level γ, to generate the denoised CT volume U(F, ỹ, γ).
  • the model is trained end-to-end to optimize E, f, and U by minimizing the loss function shown in Equation (1), which has only one mean square error (MSE) term. Unlike known GAN-based CT reconstruction work, whose loss functions have three or more terms, the simplicity of this loss function makes training easier and greatly reduces the work of hyperparameter tuning needed to balance different terms.
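The loss function itself did not survive extraction. A plausible reconstruction, consistent with the surrounding description (a single MSE term, with the denoiser U applied to volume features F, noisy volume ỹ, and noise level γ), would be:

```latex
\mathcal{L} \;=\; \mathbb{E}_{(x,y),\;\epsilon \sim \mathcal{N}(0, I),\;\gamma}
\,\big\lVert\, U\!\left(F,\; \tilde{y},\; \gamma\right) - y \,\big\rVert_2^2,
\qquad
\tilde{y} \;=\; \sqrt{\gamma}\, y \;+\; \sqrt{1-\gamma}\,\epsilon
\tag{1}
```

This form is an assumption based on the DDPM formulation referenced in the text, not the verbatim equation from the publication.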
  • Fig. 3 is a flow diagram illustrating an inference phase of a system in accordance with the present disclosure for generating a 3D volume from a limited number of X-ray images.
  • the trained diffusion model is conditioned with the input images in order to sample the volume and generate a 3D model.
  • one or more X- ray images are input to the system in step 300.
  • Volume features are then extracted with the image encoder E and MLP f from the input images in step 310. Then the volume features are concatenated with noise randomly drawn from a 3D Gaussian distribution in step 320.
  • the present methods then sample the volume with the diffusion model conditioned on the concatenation of volume features and 3D Gaussian noise by iterative denoising of the noisy volume in step 330.
  • denoising diffusion implicit models (DDIMs) may be used for the iterative denoising.
  • 25 DDIM denoising steps have been found suitable as a default setting, but other numbers of denoising steps may also be used.
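The DDIM-style sampling loop just described can be sketched as below. The noise-predicting model is a dummy stand-in for the trained conditional denoiser (its interface here is assumed), and the deterministic eta = 0 update is one standard DDIM variant.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule
alpha_bar = np.cumprod(1.0 - betas)

def eps_model(x, t):
    """Dummy stand-in for the trained conditional noise predictor."""
    return 0.1 * x

def ddim_sample(shape, num_steps=25):
    """Deterministic DDIM sampling (eta = 0) over a subset of the T steps."""
    steps = np.linspace(T - 1, 0, num_steps).astype(int)
    x = rng.standard_normal(shape)   # start from pure Gaussian noise
    for i, t in enumerate(steps):
        eps = eps_model(x, t)
        # Predict the clean sample implied by the current noise estimate.
        x0_pred = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        if i + 1 < len(steps):
            t_next = steps[i + 1]
            # Deterministic jump to the next (less noisy) timestep.
            x = (np.sqrt(alpha_bar[t_next]) * x0_pred
                 + np.sqrt(1 - alpha_bar[t_next]) * eps)
        else:
            x = x0_pred
    return x

volume = ddim_sample((8, 8, 8), num_steps=25)  # 25 steps, as in the text
assert volume.shape == (8, 8, 8)
```

Because DDIM skips most of the T timesteps, 25 steps suffice where ancestral DDPM sampling would take all 1000.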
  • the present embodiments can sample a CT volume from the volume features extracted from the input X-ray images in a feed-forward manner.
  • An objective is not only to offer 3D information but also to preserve the original 2D information from input X-rays.
  • a method to iteratively refine the initial reconstructed 3D CT volume by enforcing the consistency between the input X-rays and reprojections can be used, which preserves more information in the original X-rays, step 330.
  • the local image features are backprojected to the corresponding voxels to obtain the volume features, which are concatenated with a noise source, such as substantially pure 3D Gaussian noise.
  • the model of the present embodiments generates an initial 3D volume from the concatenated volume.
  • the initial 3D volume is then re-projected with the poses of input X-rays to generate the reprojections, step 340.
  • the process may preferably include a process to fine-tune the model to refine the 3D reconstruction by minimizing the L2 loss between the input X-rays and reprojections, step 350.
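The refinement loop just described can be illustrated with a deliberately simplified toy: here the frozen generator G and the pose-based reprojection P are linear maps so the L2 gradient is analytic, whereas in the real method the generator is the diffusion sampler, the projector is a differentiable reprojection, and only the feature-extraction parameters are updated. Everything in this sketch is an assumed stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins: z -> G z is the "generated volume",
# volume -> P volume is the "reprojection" to the X-ray plane.
n_latent, n_volume, n_pixels = 16, 32, 8
G = rng.standard_normal((n_volume, n_latent)) / np.sqrt(n_latent)
P = rng.standard_normal((n_pixels, n_volume)) / np.sqrt(n_volume)
x = rng.standard_normal(n_pixels)  # the "input X-ray" to match

A = P @ G          # reprojection of the generated volume, as one linear map
z = np.zeros(n_latent)
lr = 0.05
for _ in range(5000):
    residual = A @ z - x           # reprojection minus input X-ray
    z -= lr * 2 * A.T @ residual   # gradient step on the L2 loss

final_loss = np.sum((A @ z - x) ** 2)  # consistency loss, driven near zero
```

The point of the toy is the structure of the loop: freeze the sampler, backpropagate the reprojection-vs-input L2 loss, and update only the upstream features.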
  • the initial 3D Gaussian noise and UNet U are fixed and fine-tuning is performed for the image encoder E and MLP f.
  • Fine tuning can help the model find the volume features generating the 3D CT that best matches the input X-rays.
  • Gradient propagation through the sampling process of diffusion models is typically computationally expensive, due to the iterative sampling.
  • Gradient checkpointing can be used to reduce the memory cost at the cost of increased computation time. While resulting in more computation, this can substantially improve the reconstruction quality.
  • Figs. 4A-4D are graphs illustrating changes in PSNR between input X-rays and reprojections, as well as the changes in PSNR, SSIM, and LPIPS between the refined volume and ground truth during iterative refinement of 3 CT volumes with 150 iterations at 25 DDIM denoising steps.
  • the consistency between input X-rays and reprojections is significantly improved, and all three metrics are also substantially improved.
  • Figs. 5A-5F are a series of images, including side (Fig. 5A), front (Fig. 5B), and rear (Fig. 5C) 2D views, cross-sectional views (Figs. 5D, 5F), and a 3D reconstruction of LIDC ground truth data (Fig. 5E).
  • Figs. 6A-6F present the corresponding image views as Figs. 5A-5F, showing a qualitative comparison of a reconstruction using the present methods, with two orthogonal images used as input data, against the ground truth of Figs. 5A-5F.
  • the present reconstruction methods allow the reconstruction of a 3D volume to augment the 2D information. This enables assessment and visualization of 3D information from a small set of 2D X-rays. With the application of different transfer functions, the 3D shape and position of different organs and bones can be visualized.
  • the present systems and methods provide for an effective evaluation of 3D organ shape.
  • the reconstruction of 3D anatomical shape from a limited number of 2D X-rays has been applied in various medical applications, including visualization of lung motion during respiration, hip replacement planning, and risk assessment of osteoporosis.
  • the present systems and methods can be used to visualize 3D anatomical information, such as 3D lung shape and body shape, from a limited number of 2D X-rays.
  • taking 3D lung shape as an illustrative example, the present method first segments the lung regions from the CT volumes. Lung masks are obtained by segmenting the lung regions from both the ground truth CT volumes in the test set and the corresponding reconstructed CT volumes.
  • the present methods can also be used to estimate total lung volume (TLV), a measure relevant to the pulmonary function test (PFT).
  • CT-derived TLV is used in various medical conditions, including the assessment of chronic obstructive pulmonary disease (COPD) and restrictive lung disease, as well as in lung volume reduction surgery and lung transplant. However, as in other applications, using CT scans to evaluate TLV has practical limitations and challenges, such as radiation exposure and high costs. In contrast, conventional chest X-rays are simpler, faster, more accessible, and expose patients to lower radiation.
  • Such software may be a computer program product that employs a machine-readable storage medium.
  • a machine-readable storage medium or computer-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein.
  • Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g., a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof.
  • a computing device may include and/or be included in a kiosk.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Pulmonology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Systems and methods for generating a 3D volumetric image from 2D X-ray images include receiving data representing at least one X-ray image of a region of interest as input data. Volume features are extracted from the X-ray image(s) and the extracted volume features are concatenated with a noise source to generate a noisy target volume. An iterative denoising process is performed on the noisy target volume by sampling the volume with a trained conditioned diffusion model. The method then reprojects the 3D volume of the region of interest. When the 2D X-ray images include pose data, the reprojection operation reprojects the 3D volume with the pose data.
PCT/US2024/056343 2023-12-11 2024-11-18 System and method for three-dimensional computed tomography reconstruction from X-rays Pending WO2025128284A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363608385P 2023-12-11 2023-12-11
US63/608,385 2023-12-11

Publications (1)

Publication Number Publication Date
WO2025128284A1 (fr) 2025-06-19

Family

ID=96058285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/056343 Pending WO2025128284A1 (fr) 2023-12-11 2024-11-18 System and method for three-dimensional computed tomography reconstruction from X-rays

Country Status (1)

Country Link
WO (1) WO2025128284A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120020448A1 (en) * 2010-07-22 2012-01-26 Kedar Bhalchandra Khare System and method for reconstruction of x-ray images
US20180161102A1 (en) * 2014-10-30 2018-06-14 Edda Technology, Inc. Method and system for estimating a deflated lung shape for video assisted thoracic surgery in augmented and mixed reality
US20190076101A1 (en) * 2017-09-13 2019-03-14 The University Of Chicago Multiresolution iterative reconstruction for region of interest imaging in x-ray cone-beam computed tomography
US20210012545A1 (en) * 2018-03-28 2021-01-14 Koninklijke Philips N.V. Tomographic x-ray image reconstruction
US20210074036A1 (en) * 2018-03-23 2021-03-11 Memorial Sloan Kettering Cancer Center Deep encoder-decoder models for reconstructing biomedical images


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24904613

Country of ref document: EP

Kind code of ref document: A1