
CN111436936A - CT image reconstruction method based on MRI - Google Patents

CT image reconstruction method based on MRI

Info

Publication number
CN111436936A
CN111436936A (application CN202010355883.3A)
Authority
CN
China
Prior art keywords
mri
space data
image
offline
undersampled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010355883.3A
Other languages
Chinese (zh)
Other versions
CN111436936B (en)
Inventor
张鞠成
孙云
饶先成
孙建忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010355883.3A priority Critical patent/CN111436936B/en
Priority to CN202110770801.6A priority patent/CN113470139B/en
Publication of CN111436936A publication Critical patent/CN111436936A/en
Application granted granted Critical
Publication of CN111436936B publication Critical patent/CN111436936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5247 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an MRI-based CT image reconstruction method comprising the following steps: 1) reconstruct MRI with a deep learning network: train the deep learning network, acquire undersampled k-space data of the object under examination, and input the undersampled k-space data into the trained deep learning network to obtain the online MRI of the object; 2) reconstruct CT images from the MRI with a bidirectional generative adversarial network. The invention has the following advantages: (1) fast MRI imaging; (2) wide applicability, usable for lung imaging as well as imaging of other parts of the human body; (3) CT images are obtained by reconstruction from MRI, avoiding the ionizing radiation of CT examination; (4) the reconstructed CT images can also be used for radiation therapy planning and PET attenuation correction.

Description

MRI-based CT image reconstruction method

Technical field

The invention relates to the technical field of medical image processing, and in particular to an MRI-based CT image reconstruction method.

Background

COVID-19 is highly infectious and has a high fatality rate; early detection, early diagnosis, early treatment, and early isolation are currently the most effective means of prevention and control. Compared with the limitations of nucleic acid testing, CT (computed tomography) examination is timely, accurate, and fast, has a high positive rate, and the extent of lung lesions correlates closely with clinical symptoms, so it has become the main reference for early screening and diagnosis of COVID-19 patients. According to the COVID-19 diagnosis and treatment plan (trial version 6), early-stage COVID-19 presents multiple small patchy shadows and interstitial changes, most prominent in the outer lung zones, and then develops into multiple ground-glass opacities and infiltrates in both lungs; in severe cases pulmonary consolidation may occur, while pleural effusion is rare. From the initial CT scan at admission, through monitoring of disease progression, to cure and discharge, a patient undergoes at least 2 and as many as 3 to 4 CT examinations. Because of ionizing radiation, CT examination is unsuitable for populations such as children and pregnant women.
Magnetic resonance imaging (MRI) offers high soft-tissue contrast, no ionizing radiation, high resolution, and tomography in arbitrary orientations, and is an important technology in modern medical imaging. MRI usually serves as an important complement to chest radiography and CT, and is of great help in differentiating intra- and extrathoracic lesions, intra- and extramediastinal lesions, and supra- and subdiaphragmatic lesions, and in understanding the origin of lesions. For imaging examination of COVID-19, the main drawbacks of MRI compared with CT are its slow imaging speed and poor depiction of fine lung structures.

Summary of the invention

In view of this, and aimed at the above problems of slow MRI imaging and poor depiction of fine lung structures, the present invention provides an MRI-based CT image reconstruction method with fast imaging speed and good depiction of fine lung structures.

The technical solution of the present invention is to provide an MRI-based CT image reconstruction method comprising the following steps:

1) Reconstruct MRI with a deep learning network, comprising the following steps:

Obtain fully sampled offline k-space data of a sample object. Full sampling means that the k-space data acquisition satisfies the Nyquist sampling theorem, so that the image of the sample object can be recovered from the fully sampled k-space data; offline k-space data refers to k-space data acquired from a magnetic resonance scanner.

Perform an inverse Fourier transform on the fully sampled offline k-space data to obtain fully sampled offline multi-contrast MRI. Multi-contrast MRI refers to scanning with multiple imaging sequences to obtain different contrasts, such as T1-weighted and T2-weighted images.

Undersample the fully sampled offline k-space data in k-space to obtain undersampled offline k-space data. Undersampling means that the k-space data acquisition does not satisfy the Nyquist sampling theorem; using such data directly for image reconstruction produces aliasing artifacts.
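A minimal sketch of this retrospective undersampling, assuming a Cartesian random phase-encode pattern with a fully sampled low-frequency center. The mask design, acceleration factor, and center fraction below are illustrative choices; the patent does not prescribe a specific sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample(kspace, accel=4, center_frac=0.08):
    """Zero out phase-encode lines so the data no longer satisfies the
    Nyquist criterion; keep the k-space center, which carries most of
    the image energy (illustrative scheme, not from the patent)."""
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / accel)        # keep ~1/accel random lines
    c = max(1, int(ny * center_frac / 2))
    mask[ny // 2 - c: ny // 2 + c] = True        # always keep the center
    return kspace * mask[:, None], mask

kspace = np.fft.fft2(rng.standard_normal((128, 128)))
under, mask = undersample(kspace)
print(f"{mask.sum()} of {mask.size} phase-encode lines kept")
```

Reconstructing directly from `under` (an inverse FFT of the zero-filled data) would show the aliasing artifacts mentioned above; the deep learning network is trained to fill in the missing lines instead.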

Train the deep learning network using the undersampled offline k-space data and the fully sampled offline multi-contrast MRI.

Acquire undersampled k-space data of the object under examination.

Input the undersampled k-space data of the object under examination into the trained deep learning network to obtain the online MRI of the object under examination.

2) Reconstruct CT images from the online MRI using a bidirectional generative adversarial network. The bidirectional generative adversarial network consists of two generators and two discriminators: the first generator G_A maps online MRI to CT images, and the second generator G_B maps CT images to online MRI. The discriminators comprise a CT discriminator and an MRI discriminator: the CT discriminator D_CT distinguishes CT images generated by the first generator G_A from real CT images, and the MRI discriminator D_MRI distinguishes MRI generated by the second generator G_B from real MRI. Reconstructing CT images from MRI with the bidirectional generative adversarial network comprises the following steps:

Acquire unlabeled, unpaired MRI and CT images.

The real MRI I_MRI is converted by generator G_A into the generated CT image G_A(I_MRI);

the generated CT image G_A(I_MRI) is then converted by generator G_B into the reconstructed MRI G_B(G_A(I_MRI));

the real CT image I_CT is converted by generator G_B into the generated MRI G_B(I_CT);

the generated MRI G_B(I_CT) is then converted by generator G_A into the reconstructed CT image G_A(G_B(I_CT)).

The generator network formed by the first generator G_A and the second generator G_B and the discriminator network formed by the CT discriminator and the MRI discriminator compete with each other and continually adjust their parameters; the final optimum is reached when the discriminator network can no longer tell whether the output of the generator network is real, while the reconstruction losses ||G_B(G_A(I_MRI)) - I_MRI|| and ||G_A(G_B(I_CT)) - I_CT|| are minimized.
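The two reconstruction (cycle-consistency) losses can be illustrated with toy generators. `G_A` and `G_B` below are hypothetical linear stand-ins for the deep networks, deliberately chosen as exact inverses so that both cycle losses vanish:

```python
import numpy as np

# Toy stand-ins for the two generators; in the patent these are deep
# networks, here simple invertible maps suffice to show the losses.
def G_A(mri):   # MRI -> CT (hypothetical toy mapping)
    return 2.0 * mri + 1.0

def G_B(ct):    # CT -> MRI (exact inverse of G_A, so cycles are perfect)
    return (ct - 1.0) / 2.0

I_MRI = np.linspace(0.0, 1.0, 16).reshape(4, 4)
I_CT = np.linspace(0.0, 2.0, 16).reshape(4, 4)

# The two cycle-consistency (reconstruction) losses from the text:
loss_mri = np.linalg.norm(G_B(G_A(I_MRI)) - I_MRI)  # ||G_B(G_A(I_MRI)) - I_MRI||
loss_ct = np.linalg.norm(G_A(G_B(I_CT)) - I_CT)     # ||G_A(G_B(I_CT)) - I_CT||
print(loss_mri, loss_ct)  # both ~0 because G_B exactly inverts G_A
```

In training, these losses are minimized jointly with the adversarial terms, pushing the learned generators toward mutually consistent MRI-to-CT and CT-to-MRI mappings even without paired data.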

With the above method, the present invention has the following advantages over the prior art: (1) fast imaging; (2) wide applicability, usable for lung imaging as well as imaging of other parts of the human body; (3) CT images reconstructed from MRI avoid the ionizing radiation of CT examination; (4) the reconstructed CT images can also be used for radiation therapy planning and PET (positron emission tomography) attenuation correction.

As an improvement, in step 2) the bidirectional generative adversarial network is a Wasserstein bidirectional generative adversarial network: the Jensen–Shannon divergence of the bidirectional generative adversarial network is replaced by the Wasserstein distance, and the loss function is λ1·||G_B(G_A(I_MRI)) - I_MRI|| + λ2·||G_A(G_B(I_CT)) - I_CT|| - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)), where λ1 and λ2 are regularization parameters that can be chosen empirically.
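A sketch of this generator-side objective under toy generators and critics. All functions and the λ values below are illustrative placeholders (not from the patent); the only fixed part is the shape of the loss itself:

```python
import numpy as np

def wgan_generator_loss(I_MRI, I_CT, G_A, G_B, D_CT, D_MRI,
                        lam1=10.0, lam2=10.0):
    """lam1*||G_B(G_A(I_MRI)) - I_MRI|| + lam2*||G_A(G_B(I_CT)) - I_CT||
    - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)); lam1/lam2 defaults are
    arbitrary illustrative choices."""
    cyc_mri = np.linalg.norm(G_B(G_A(I_MRI)) - I_MRI)
    cyc_ct = np.linalg.norm(G_A(G_B(I_CT)) - I_CT)
    adv = -D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI))
    return lam1 * cyc_mri + lam2 * cyc_ct + adv

# Toy generators/critics just to exercise the formula; Wasserstein
# critics output an unbounded scalar score rather than a probability.
G_A = lambda x: x + 0.1
G_B = lambda x: x - 0.1
D_CT = lambda x: float(np.mean(x))
D_MRI = lambda x: float(np.mean(x))

I_MRI = np.ones((4, 4))
I_CT = np.full((4, 4), 2.0)
loss = wgan_generator_loss(I_MRI, I_CT, G_A, G_B, D_CT, D_MRI)
print(loss)  # cycle terms vanish, leaving -(1.9 + 1.1) = -3.0
```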

As an improvement, in step 2) a perceptual loss is added to the loss function, with a pretrained VGG16 network as the feature extractor; the loss function is:

(loss function given as formula image BDA0002473419480000031 in the original document)

where λ1, λ2, λ3 and λ4 are regularization parameters that can be chosen empirically. VGG16 is a classic deep learning model for image classification tasks; VGG is the convolutional neural network model proposed by Simonyan and Zisserman in "Very Deep Convolutional Networks for Large-Scale Image Recognition", named after the authors' Visual Geometry Group at the University of Oxford. The model achieved excellent results in the 2014 ImageNet image classification and localization challenge, ranking second in the classification task and first in the localization task.
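The idea of a perceptual loss, measuring distance in a feature space rather than pixel space, can be illustrated without the actual VGG16 weights. The gradient-based "feature extractor" below is a toy stand-in for VGG features, not VGG itself:

```python
import numpy as np

def features(img):
    """Toy feature extractor standing in for pretrained VGG16 layers
    (image gradients; purely illustrative, not VGG)."""
    gy, gx = np.gradient(img)
    return np.stack([gx, gy])

def perceptual_loss(pred, target):
    # L2 distance in feature space rather than pixel space
    return float(np.linalg.norm(features(pred) - features(target)))

a = np.zeros((8, 8))
b = a + 5.0                          # same structure, different brightness
print(perceptual_loss(a, b))         # 0.0: gradient features ignore the offset
print(float(np.linalg.norm(a - b)))  # 40.0: pixel loss penalizes it heavily
```

This is why a perceptual term helps image-to-image translation: it rewards structural agreement (edges, textures) instead of exact pixel intensities, which differ systematically between MRI and CT.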

As an improvement, reconstructing MRI with the deep learning network in step 1) comprises:

obtaining fully sampled offline k-space data y0 of a sample object;

performing an inverse Fourier transform on the fully sampled offline k-space data y0 to obtain a fully sampled offline multi-contrast magnetic resonance image x0;

undersampling the fully sampled offline k-space data y0 in k-space to obtain undersampled offline k-space data y1;

high-pass filtering the undersampled offline k-space data y1 to obtain y1*h;

training the deep learning network with the high-pass filtered undersampled offline k-space data y1*h and the fully sampled offline multi-contrast magnetic resonance image x0;

acquiring undersampled k-space data y2 of the object under examination;

high-pass filtering the undersampled k-space data y2 to obtain y2*h;

inputting the high-pass filtered undersampled k-space data y2*h of the object under examination into the trained deep learning network to obtain k-space data y2'*h after k-space filling;

applying inverse high-pass filtering to y2'*h to obtain reconstructed k-space data y2';

performing an inverse Fourier transform on the reconstructed k-space data y2' to obtain the online magnetic resonance image.
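One plausible realization of the high-pass filtering and its inverse is a pointwise radial weighting of centered k-space. The filter h is not specified in the text, so the weighting below is an illustrative assumption, kept strictly positive so that the inverse filtering step is well defined:

```python
import numpy as np

def highpass_weight(shape, cutoff=0.1, floor=0.1):
    """Radial high-pass weighting for centered k-space; the floor keeps
    every weight > 0 so the filter is exactly invertible (illustrative
    choice, the patent leaves h unspecified)."""
    ny, nx = shape
    y = np.linspace(-0.5, 0.5, ny)[:, None]
    x = np.linspace(-0.5, 0.5, nx)[None, :]
    r = np.sqrt(x**2 + y**2)
    return floor + (1.0 - floor) * (r > cutoff)

k = np.fft.fftshift(np.fft.fft2(
    np.random.default_rng(1).standard_normal((32, 32))))
h = highpass_weight(k.shape)
k_hp = k * h        # high-pass filtered data fed to the network
k_back = k_hp / h   # inverse filtering restores the original k-space
print(np.allclose(k_back, k))
```

The round trip shows why an inverse high-pass filtering step can exactly undo the forward filtering before the final inverse Fourier transform.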

As an improvement, reconstructing MRI with the deep learning network in step 1) comprises:

obtaining fully sampled multi-channel offline k-space data y0 of a sample object;

performing an inverse Fourier transform on the fully sampled multi-channel offline k-space data y0 to obtain a fully sampled multi-channel offline multi-contrast magnetic resonance image x0;

undersampling the fully sampled multi-channel offline k-space data y0 in k-space to obtain undersampled multi-channel offline k-space data y1;

high-pass filtering the undersampled multi-channel offline k-space data y1 to obtain y1*h;

training the deep learning networks with the high-pass filtered undersampled multi-channel offline k-space data y1*h and the fully sampled multi-channel offline multi-contrast magnetic resonance image x0; as shown in Fig. 3, deep learning networks are placed before and after the parallel imaging step;

the k-space data after the two deep learning networks and parallel imaging is y1'*h; inverse high-pass filtering and data consistency correction are applied to y1'*h, the sampled k-space data replacing the k-space data at the corresponding positions so that the deep learning networks only fill unsampled k-space data;

performing an inverse Fourier transform and a root-mean-square operation on the data-consistency-corrected k-space data to obtain the final offline reconstructed magnetic resonance image;

acquiring multi-channel undersampled k-space data y2 of the object under examination;

high-pass filtering the multi-channel undersampled k-space data y2 to obtain y2*h;

inputting the high-pass filtered multi-channel undersampled k-space data y2*h of the object under examination into the trained deep learning networks to obtain k-space data y2'*h after k-space filling;

applying inverse high-pass filtering to y2'*h to obtain reconstructed k-space data y2';

performing an inverse Fourier transform and a root-mean-square operation on the reconstructed k-space data y2' to obtain the online magnetic resonance image.
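The channel-wise inverse Fourier transform followed by the root-mean-square combination can be sketched as follows. A root-sum-of-squares across coil channels is one common realization of this operation; the channel-first layout is an assumption:

```python
import numpy as np

def rss_combine(multichannel_kspace):
    """Per-channel inverse FFT, then root-sum-of-squares across coils
    (one common reading of the 'root-mean-square operation'; assumes
    channel-first complex k-space of shape (coils, ny, nx))."""
    imgs = np.fft.ifft2(multichannel_kspace, axes=(-2, -1))
    return np.sqrt(np.sum(np.abs(imgs) ** 2, axis=0))

rng = np.random.default_rng(2)
coils = rng.standard_normal((8, 32, 32)) + 1j * rng.standard_normal((8, 32, 32))
kspace = np.fft.fft2(coils, axes=(-2, -1))
combined = rss_combine(kspace)
print(combined.shape)  # (32, 32)
```

The combination collapses the coil dimension into a single real-valued magnitude image, which is what the offline and online reconstructions return.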

As an improvement, the parallel imaging method is GRAPPA or SPIRiT.

As an improvement, the deep learning network in step 1) consists of a k-space-domain U-Net and an image-domain U-Net. The undersampled offline k-space data is first fed into the k-space-domain U-Net and then undergoes data consistency correction; an inverse Fourier transform yields a magnetic resonance image, which is fed into the image-domain U-Net; a Fourier transform then yields k-space data, followed by another data consistency correction. Data consistency correction replaces the k-space data at sampled positions with the acquired samples, ensuring that the deep learning network only fills unsampled k-space data.
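The data consistency correction described here, replacing network-predicted k-space values at sampled positions with the measured data, can be sketched as:

```python
import numpy as np

def data_consistency(k_pred, k_sampled, mask):
    """Keep the measured samples where they exist and let the network
    output fill only the unsampled positions."""
    return np.where(mask, k_sampled, k_pred)

rng = np.random.default_rng(3)
k_true = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
mask = rng.random((16, 16)) < 0.3            # sampled k-space positions
k_sampled = k_true * mask                    # zero-filled acquisition
k_pred = rng.standard_normal((16, 16)) + 0j  # stand-in for a network output
k_dc = data_consistency(k_pred, k_sampled, mask)
print(np.allclose(k_dc[mask], k_true[mask]))    # measured data preserved
print(np.allclose(k_dc[~mask], k_pred[~mask]))  # network fills the rest
```

In the dual-domain U-Net this projection is applied after each domain's network, which guarantees the reconstruction never contradicts the acquired measurements.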

Brief description of the drawings

Fig. 1 is a flowchart of reconstructing MRI with a deep learning network according to the present invention;

Fig. 2 is a schematic diagram of reconstructing CT images from MRI with a generative adversarial network according to the present invention;

Fig. 3 is a flowchart of reconstructing MRI with a deep learning network in Embodiment 1 of the present invention;

Fig. 4 is a structural diagram of the deep learning network in Embodiment 1 of the present invention.

1: MRI space; 2: CT space; 31: generator G_A generating CT from MRI; 32: generator G_B generating MRI from CT; 41: MRI discriminator; 42: CT discriminator; 11: real MRI image; 12: generated CT image; 13: reconstructed MRI image; 21: real CT image; 22: generated MRI image; 23: reconstructed CT image.

Detailed description

The present invention is further described below with reference to the accompanying drawings and specific embodiments, but the invention is not limited to these embodiments. The invention covers any alternatives, modifications, equivalent methods, and schemes made within its spirit and scope. To give the public a thorough understanding of the invention, specific details are set forth in the following preferred embodiments; those skilled in the art can nevertheless fully understand the invention without these details.

Compared with traditional MRI reconstruction techniques, deep-learning-based image reconstruction methods have great potential for shortening magnetic resonance scan time, accelerating imaging, and improving image quality. Fig. 1 is a flowchart of reconstructing MRI with a deep learning network according to the present invention. During offline training, the deep learning network is trained with undersampled offline k-space data and fully sampled offline multi-contrast magnetic resonance images; the reconstructed k-space data produced by the network is inverse-Fourier-transformed to obtain the reconstructed magnetic resonance image. During online testing, the undersampled k-space data of the object under examination is fed into the deep learning network, which outputs reconstructed k-space data; an inverse Fourier transform then yields the reconstructed magnetic resonance image. k-space is the Fourier dual of Cartesian image space, i.e., the frequency space of the Fourier transform, and is used mainly in magnetic resonance imaging.
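Both the offline and online pipelines end with an inverse Fourier transform from k-space back to image space. A minimal NumPy sketch (the helper name is hypothetical):

```python
import numpy as np

def kspace_to_image(kspace):
    """Recover a magnitude image from fully sampled, centered k-space.

    k-space is the Fourier dual of image space, so a centered 2-D
    inverse FFT recovers the image (hypothetical helper, not code
    from the patent).
    """
    shifted = np.fft.ifftshift(kspace)          # move DC term to the corner
    image = np.fft.ifft2(shifted)               # inverse 2-D Fourier transform
    return np.abs(np.fft.fftshift(image))       # centered magnitude image

# Round trip: image -> k-space -> image recovers the original.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                     # simple square "tissue"
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = kspace_to_image(kspace)
print(np.allclose(recon, phantom, atol=1e-10))  # True
```

With undersampled k-space the same transform produces aliasing, which is exactly what the trained network corrects by filling the missing samples first.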

Optionally, the multi-contrast images include a T1-weighted image, a T2-weighted image, and a proton density image, all with the same field of view and matrix size. The T1-weighted image mainly highlights differences in longitudinal relaxation of the tissue in the sample object while minimizing the influence of other tissue properties such as transverse relaxation; the T2-weighted image mainly highlights differences in transverse relaxation of the tissue; the proton density image mainly reflects differences in proton content of the tissue.

Fig. 2 is a schematic diagram of reconstructing CT images from MRI with a generative adversarial network according to the present invention. 1: MRI space; 2: CT space; 31: generator G_A generating CT from MRI; 32: generator G_B generating MRI from CT; 41: MRI discriminator; 42: CT discriminator; 11: real MRI image; 12: generated CT image; 13: reconstructed MRI image; 21: real CT image; 22: generated MRI image; 23: reconstructed CT image.

According to one embodiment, unlabeled, unpaired MRI and CT images are acquired;

the real MRI image I_MRI is converted by generator G_A into the generated CT image G_A(I_MRI);

the generated CT image G_A(I_MRI) is then converted by generator G_B into the reconstructed MRI image G_B(G_A(I_MRI));

similarly, the real CT image I_CT is converted by generator G_B into the generated MRI image G_B(I_CT);

the generated MRI image G_B(I_CT) is then converted by generator G_A into the reconstructed CT image G_A(G_B(I_CT)).

The generator network and the discriminator network compete with each other and continually adjust their parameters; the final optimum is reached when the discriminator network can no longer tell whether the output of the generator network is real.

During online testing, an MRI image is input and the corresponding CT image is obtained through generator G_A.

According to one embodiment, during online testing the undersampled k-space data of the object under examination is input; the magnetic resonance reconstruction deep learning network produces the reconstructed magnetic resonance image, which is then fed into the generative adversarial network to obtain the final CT image.

According to one embodiment, the CT image reconstruction method comprises the following steps:

obtaining fully sampled offline k-space data y0 of a sample object;

performing an inverse Fourier transform on the fully sampled offline k-space data y0 to obtain a fully sampled offline multi-contrast magnetic resonance image x0;

undersampling the fully sampled offline k-space data y0 in k-space to obtain undersampled offline k-space data y1;

high-pass filtering the undersampled offline k-space data y1 to obtain y1*h;

training the deep learning network with the high-pass filtered undersampled offline k-space data y1*h and the fully sampled offline multi-contrast magnetic resonance image x0;

acquiring undersampled k-space data y2 of the object under examination;

high-pass filtering the undersampled k-space data y2 to obtain y2*h;

inputting the high-pass filtered undersampled k-space data y2*h of the object under examination into the trained deep learning network to obtain k-space data y2'*h after k-space filling;

applying inverse high-pass filtering to y2'*h to obtain reconstructed k-space data y2';

performing an inverse Fourier transform on the reconstructed k-space data y2' to obtain the online magnetic resonance image;

inputting the online magnetic resonance image into the generative adversarial network to obtain the final CT image.

According to one embodiment, for multi-channel magnetic resonance imaging, the MRI reconstruction steps combining parallel imaging and deep learning include:

Obtain fully sampled multi-channel offline k-space data y0 of a sample object;

Perform an inverse Fourier transform on the fully sampled multi-channel offline k-space data y0 to obtain a fully sampled multi-channel offline multi-contrast magnetic resonance image x0;

Undersample the fully sampled multi-channel offline k-space data y0 in k-space to obtain undersampled multi-channel offline k-space data y1;

Perform high-pass filtering on the undersampled multi-channel offline k-space data y1 to obtain y1*h;

Train deep learning networks from the high-pass filtered undersampled multi-channel offline k-space data y1*h and the fully sampled multi-channel offline multi-contrast magnetic resonance image x0; as shown in FIG. 3, deep learning networks are set up before and after parallel imaging, respectively;

After the two deep learning networks and parallel imaging, the k-space data are y1'*h; perform inverse high-pass filtering and data-consistency correction on y1'*h, replacing the k-space data at the sampled positions with the acquired k-space data, so that the deep learning networks fill in only the unsampled k-space data;

Perform an inverse Fourier transform and a root mean square operation on the data-consistency-corrected k-space data to obtain the final offline reconstructed magnetic resonance image;

Obtain multi-channel undersampled k-space data y2 of the object to be measured;

Perform high-pass filtering on the multi-channel undersampled k-space data y2 to obtain y2*h;

Input the high-pass filtered multi-channel undersampled k-space data y2*h of the object to be measured into the trained deep learning network to obtain the k-space-filled data y2'*h;

Perform inverse high-pass filtering on y2'*h to obtain the reconstructed k-space data y2';

Perform an inverse Fourier transform and a root mean square operation on the reconstructed k-space data y2' to obtain the online magnetic resonance image.
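In the multi-coil setting, the "root mean square operation" above is commonly the root-sum-of-squares (RSS) combination of the per-coil images; a minimal sketch, assuming a `(n_channels, ny, nx)` array layout:

```python
import numpy as np

def rss_combine(multi_channel_images):
    """Root-sum-of-squares combination of per-coil complex images.

    Input shape: (n_channels, ny, nx); output is a real magnitude image.
    """
    return np.sqrt(np.sum(np.abs(multi_channel_images) ** 2, axis=0))

def multi_channel_reconstruct(kspace_per_coil):
    """Inverse-FFT each coil's k-space, then combine the coil images."""
    coil_images = np.fft.ifft2(kspace_per_coil, axes=(-2, -1))
    return rss_combine(coil_images)
```

For example, two coils whose images have constant magnitudes 3 and 4 combine to a constant magnitude-5 image.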

Input the online magnetic resonance image into the generative adversarial network to obtain the final CT image.

According to one embodiment, as shown in FIG. 4, the deep learning network includes a k-space-domain U-Net and an image-domain U-Net. The undersampled offline k-space data are first input into the k-space-domain U-Net, followed by data-consistency correction; an inverse Fourier transform then yields a magnetic resonance image, which is input into the image-domain U-Net; a Fourier transform returns the result to k-space, where data-consistency correction is applied again. The data-consistency correction replaces the k-space data at the sampled positions with the acquired k-space data, ensuring that the deep learning network fills in only the unsampled k-space data.
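The data-consistency correction described above is a simple masked replacement in k-space; a minimal sketch, assuming a boolean sampling mask that is True wherever data were actually acquired:

```python
import numpy as np

def data_consistency(predicted_kspace, acquired_kspace, sampling_mask):
    """Replace network-predicted values at sampled k-space locations
    with the actually acquired data, so the network only contributes
    the unsampled positions.

    sampling_mask: boolean array, True where k-space was acquired.
    """
    return np.where(sampling_mask, acquired_kspace, predicted_kspace)
```

This is applied after each U-Net stage, so the measured data are never overwritten by the network.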

Based on the above reconstruction method, an MRI-based CT image reconstruction system can be constructed.

The above describes only preferred embodiments of the present invention and should not be construed as limiting the claims. The present invention is not limited to the above embodiments, and their specific structure may vary. In short, all changes made within the protection scope of the independent claims of the present invention fall within the protection scope of the present invention.

Claims (7)

1. An MRI-based CT image reconstruction method, comprising the steps of:
1) reconstructing an MRI using a deep learning network, comprising the steps of:
acquiring fully sampled offline k-space data of a sample object, wherein full sampling means that the k-space data acquisition satisfies the Nyquist sampling theorem, so that an image of the sample object can be restored from the fully sampled k-space data, and offline k-space data means k-space data acquired from a magnetic resonance device;
performing an inverse Fourier transform on the fully sampled offline k-space data to obtain fully sampled offline multi-contrast MRI, wherein multi-contrast MRI refers to scanning with multiple imaging sequences to obtain images of different contrasts;
undersampling the fully sampled offline k-space data in k-space to obtain undersampled offline k-space data, wherein undersampling means that the k-space data acquisition does not satisfy the Nyquist sampling theorem, so that aliasing artifacts arise when the undersampled k-space data are used directly for image reconstruction;
training a deep learning network according to the undersampled offline k-space data and the fully sampled offline multi-contrast MRI;
acquiring undersampled k-space data of an object to be detected;
inputting the undersampled k-space data of the object to be detected into a trained deep learning network to obtain an on-line MRI of the object to be detected;
2) reconstructing a CT image from the on-line MRI using a bidirectional generative adversarial network; the bidirectional generative adversarial network is composed of two generators and two discriminators: the first generator GA maps an on-line MRI to a CT image, and the second generator GB maps a CT image to an on-line MRI; the discriminators include a CT discriminator DCT, which distinguishes CT images generated by the first generator GA from real CT images, and an MRI discriminator DMRI, which distinguishes MRIs generated by the second generator GB from real MRIs; reconstructing a CT image from MRI using the bidirectional generative adversarial network comprises the steps of:
acquiring unlabeled and unpaired MRI and CT images, respectively;
a real MRI IMRI is converted by the generator GA into a generated CT image GA(IMRI);
the generated CT image GA(IMRI) is converted by the generator GB into a reconstructed MRI GB(GA(IMRI));
a real CT image ICT is converted by the generator GB into a generated MRI GB(ICT);
the generated MRI GB(ICT) is converted by the generator GA into a reconstructed CT image GA(GB(ICT));
the generator network formed by the first generator GA and the second generator GB and the discriminator network formed by the CT discriminator and the MRI discriminator compete against each other and continuously adjust their parameters; through optimization, the discriminator network finally cannot judge whether the output of the generator network is real, while the reconstruction losses ||GB(GA(IMRI)) - IMRI|| and ||GA(GB(ICT)) - ICT|| are minimized.
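The four conversions and the two cycle-consistency (reconstruction) losses above can be sketched as follows; `G_A` and `G_B` are arbitrary callables standing in for the trained generator networks:

```python
import numpy as np

def cycle_pass(I_mri, I_ct, G_A, G_B):
    """The two cycles: MRI -> CT -> MRI and CT -> MRI -> CT."""
    fake_ct = G_A(I_mri)    # real MRI -> generated CT image G_A(I_MRI)
    rec_mri = G_B(fake_ct)  # generated CT -> reconstructed MRI G_B(G_A(I_MRI))
    fake_mri = G_B(I_ct)    # real CT -> generated MRI G_B(I_CT)
    rec_ct = G_A(fake_mri)  # generated MRI -> reconstructed CT G_A(G_B(I_CT))
    return rec_mri, rec_ct

def reconstruction_loss(rec, real):
    """Mean absolute error, i.e. ||rec - real|| averaged over pixels."""
    return np.mean(np.abs(rec - real))
```

With identity mappings in place of the generators, both reconstruction losses are zero, which is the fixed point the cycle-consistency terms push the training toward.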
2. The MRI-based CT image reconstruction method of claim 1, wherein: in step 2), the bidirectional generative adversarial network is a Wasserstein bidirectional generative adversarial network, in which the Wasserstein distance replaces the Jensen-Shannon divergence, and the loss function is:
λ1||GB(GA(IMRI)) - IMRI|| + λ2||GA(GB(ICT)) - ICT|| - DMRI(GB(ICT)) - DCT(GA(IMRI)), where λ1 and λ2 are regularization parameters.
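The generator-side objective of claim 2 can be sketched as follows; the default values of the regularization parameters `lam1`/`lam2` are assumptions, and `D_mri`/`D_ct` stand in for the trained Wasserstein critics:

```python
import numpy as np

def wgan_total_loss(I_mri, I_ct, G_A, G_B, D_mri, D_ct, lam1=10.0, lam2=10.0):
    """Cycle losses plus negated critic scores, as in the formula above:
    lam1*||G_B(G_A(I_MRI)) - I_MRI|| + lam2*||G_A(G_B(I_CT)) - I_CT||
    - D_MRI(G_B(I_CT)) - D_CT(G_A(I_MRI)).
    """
    cyc_mri = np.mean(np.abs(G_B(G_A(I_mri)) - I_mri))
    cyc_ct = np.mean(np.abs(G_A(G_B(I_ct)) - I_ct))
    return (lam1 * cyc_mri + lam2 * cyc_ct
            - np.mean(D_mri(G_B(I_ct))) - np.mean(D_ct(G_A(I_mri))))
```

Minimizing this over the generators pulls the cycle terms to zero while pushing the critic scores of the generated images up, which is the Wasserstein counterpart of fooling the discriminators.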
3. The MRI-based CT image reconstruction method of claim 2, wherein: in step 2), a perceptual loss is added to the loss function, and a pre-trained VGG16 network is used as the feature extractor; the loss function is:
λ1||GB(GA(IMRI)) - IMRI|| + λ2||GA(GB(ICT)) - ICT|| + λ3||φ(GB(GA(IMRI))) - φ(IMRI)|| + λ4||φ(GA(GB(ICT))) - φ(ICT)|| - DMRI(GB(ICT)) - DCT(GA(IMRI)),
where φ(·) denotes the VGG16 feature extractor and λ1, λ2, λ3 and λ4 are regularization parameters.
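A perceptual loss compares images in the feature space of a pre-trained network rather than pixel space; a minimal sketch, with `feature_extractor` as a stand-in for the VGG16 features (here replaced by an arbitrary callable for illustration):

```python
import numpy as np

def perceptual_loss(x, y, feature_extractor):
    """Mean absolute distance between feature maps: ||phi(x) - phi(y)||.

    feature_extractor stands in for the pre-trained VGG16 network; any
    callable mapping an image to a feature array works for this sketch.
    """
    return np.mean(np.abs(feature_extractor(x) - feature_extractor(y)))
```

Identical inputs give zero loss, and the loss grows as the images diverge in feature space, which is what lets it penalize structural rather than purely pixel-wise differences.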
4. The MRI-based CT image reconstruction method of claim 1, wherein reconstructing the MRI using the deep learning network in step 1) comprises the steps of:
acquiring fully sampled offline k-space data y0 of a sample object;
performing an inverse Fourier transform on the fully sampled offline k-space data y0 to obtain a fully sampled offline multi-contrast magnetic resonance image x0;
undersampling the fully sampled offline k-space data y0 in k-space to obtain undersampled offline k-space data y1;
performing high-pass filtering on the undersampled offline k-space data y1 to obtain y1*h;
training a deep learning network from the high-pass filtered undersampled offline k-space data y1*h and the fully sampled offline multi-contrast magnetic resonance image x0;
acquiring undersampled k-space data y2 of an object to be detected;
performing high-pass filtering on the undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering on y2'*h to obtain reconstructed k-space data y2';
performing an inverse Fourier transform on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
5. The MRI-based CT image reconstruction method of claim 1 or 4, wherein reconstructing the MRI using the deep learning network in step 1) comprises:
acquiring fully sampled multi-channel offline k-space data y0 of a sample object;
performing an inverse Fourier transform on the fully sampled multi-channel offline k-space data y0 to obtain a fully sampled multi-channel offline multi-contrast magnetic resonance image x0;
undersampling the fully sampled multi-channel offline k-space data y0 in k-space to obtain undersampled multi-channel offline k-space data y1;
performing high-pass filtering on the undersampled multi-channel offline k-space data y1 to obtain y1*h;
training deep learning networks from the high-pass filtered undersampled multi-channel offline k-space data y1*h and the fully sampled multi-channel offline multi-contrast magnetic resonance image x0, wherein deep learning networks are arranged before and after parallel imaging, respectively, as shown in FIG. 3;
the k-space data after the two deep learning networks and parallel imaging being y1'*h, performing inverse high-pass filtering and data-consistency correction on y1'*h, replacing the k-space data at the sampled positions with the acquired k-space data, so that the deep learning networks fill in only the unsampled k-space data;
performing an inverse Fourier transform and a root mean square operation on the data-consistency-corrected k-space data to obtain a final offline reconstructed magnetic resonance image;
acquiring multi-channel undersampled k-space data y2 of an object to be detected;
performing high-pass filtering on the multi-channel undersampled k-space data y2 to obtain y2*h;
inputting the high-pass filtered multi-channel undersampled k-space data y2*h of the object to be detected into the trained deep learning network to obtain k-space-filled data y2'*h;
performing inverse high-pass filtering on y2'*h to obtain reconstructed k-space data y2';
performing an inverse Fourier transform and a root mean square operation on the reconstructed k-space data y2' to obtain an on-line magnetic resonance image.
6. The MRI-based CT image reconstruction method of claim 5, wherein: the parallel imaging method is one of GRAPPA or SPIRiT.
7. The MRI-based CT image reconstruction method according to any one of claims 1 to 3, wherein: the deep learning network in the step 1) is composed of a k-space domain U-Net and an image domain U-Net, undersampled off-line k-space data is firstly input into the k-space domain U-Net, then data consistency correction is carried out, a magnetic resonance image is obtained through inverse Fourier transformation, the image domain U-Net is input, then Fourier transformation is carried out to obtain k-space data, and data consistency correction is carried out; the data consistency correction ensures that the deep learning network only fills the non-sampled k-space data by replacing the k-space data at the corresponding position with the k-space data obtained by sampling.

Publications (2)

Publication Number Publication Date
CN111436936A (en) 2020-07-24
CN111436936B (en) 2021-07-27





Also Published As

Publication number Publication date
CN113470139A (en) 2021-10-01
CN111436936B (en) 2021-07-27
CN113470139B (en) 2024-08-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant