CN111898324B - A segmentation task-assisted three-dimensional dose distribution prediction method for nasopharyngeal carcinoma - Google Patents
- Publication number
- CN111898324B (application CN202010814307.0A)
- Authority
- CN
- China
- Prior art keywords
- network
- perform
- prediction
- image
- dose distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a segmentation-task-assisted three-dimensional dose distribution prediction method for nasopharyngeal carcinoma. The method comprises: collecting original nasopharyngeal carcinoma images and annotating them; and constructing a dose distribution prediction model comprising an auxiliary segmentation network, a dose prediction network, and an adversarial network. The segmentation network and the prediction network share encoder parameters; by jointly training the segmentation task and the dose prediction task, the model learns the representation shared by the two tasks, strengthens the feature expression ability of the shared encoder, and drives the network, under limited training data, to extract the essential features of the segmentation task that assist dose prediction. In addition, to make effective use of the feature information at different scales of the prediction decoder, the invention proposes an iterative multi-scale fusion (IMF) strategy at the decoder side of the prediction task, yielding more accurate prediction results.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a segmentation-task-assisted three-dimensional dose distribution prediction method for nasopharyngeal carcinoma.
Background Art
Medical image segmentation is an important branch of image processing. Its goal is to extract, from medical images, the information that characterizes a patient's condition, providing a scientific basis for clinical diagnosis and treatment and valuable support for physicians' judgment. In tumor treatment, radiotherapy is among the most effective treatment techniques and one of the least harmful to the patient's overall health; its non-invasive, low-toxicity character has been affirmed by the medical community. Delivering radiotherapy strictly at the prescribed dose is the key to treatment success. However, controlling the dose during clinical radiotherapy remains a difficult problem, and existing research results are still far from sufficient for clinical application.
Although predicting the dose distribution of tumor radiotherapy is very difficult, a number of notable results exist. Starting from the dose-volume histogram (DVH) indices of organs at risk, Wu et al. proposed the overlap volume histogram (OVH), a new concept strongly correlated with the DVH. Guided by clinical experience, they hypothesized that the farther a voxel lies from the target volume, the lower its dose should be; under this assumption, the corresponding upper (or lower) dose limits are found by comparing the OVH of a given organ, but the model is relatively subjective and coarse. Zhu et al. established a multivariate nonlinear model relating an organ's distance-to-target histogram (DTH) to its DVH, so that the DVH of a new patient's organ can be predicted by feeding that organ's DTH into the model. The model achieves fairly good predictions, but besides the DTH, the volumes of the organ and of the target are also important factors affecting the DVH, and the overfitting caused by the small amount of data remains to be addressed.
In his master's thesis, Kong Fantu built models relating patient geometry to the three-dimensional organ dose distribution using support vector regression (SVR) and artificial neural networks (ANN). However, that work does not distinguish the target volume from the organs at risk (i.e., it does not segment them), and it evaluates dose prediction with only a few indicators, namely the three-dimensional dose distribution error and the per-organ DVH difference.
With the continuing development of medical imaging, algorithms for delineating target volumes in medical images, based on both traditional machine learning and deep neural networks, have produced increasingly rich results; in tumor image processing in particular, tumor-region segmentation has achieved remarkable success. Radiotherapy is currently one of the main means of tumor treatment, offering low toxicity and non-invasiveness; it not only has the greatest possibility of curing or controlling the tumor but also preserves the patient's post-treatment quality of life. To improve the accuracy and safety of clinical radiotherapy, it is therefore important to study the radiation dose distribution over the target organs and the surrounding organs at risk (OAR). At present there are few domestic results on using a segmentation task to assist three-dimensional dose distribution prediction: existing dose prediction algorithms are either based on traditional machine learning, or take the original CT image and the segmented anatomical structures as prior information and use a deep neural network for dose prediction alone, without considering the guidance that the segmentation task can provide to the prediction task.
When studying three-dimensional dose distribution prediction, scholars at home and abroad mostly use the original CT image and the segmented anatomical structures as prior information to mine the intrinsic relationship between anatomy and dose distribution. Yet the two tasks of anatomical structure segmentation and dose distribution prediction are in fact interrelated and complementary, and research on using the segmentation task to assist three-dimensional dose prediction is still lacking. Most existing three-dimensional dose distribution prediction results are based on deep neural networks. Specifically, Dan Nguyen et al. proposed HD U-net (Hierarchically Densely Connected U-net), which takes three-dimensional head-and-neck image blocks of size 96×96×64 as input and applies two dense convolutions; each dense convolution applies a standard convolution with ReLU activation and concatenates the output with the preceding set of feature maps. Every level of the U-net performs this operation twice in a row and then applies dense downsampling: a stride-2 convolution with ReLU produces a new set of feature maps at half the input resolution, while the original feature maps are max-pooled and concatenated with the new ones. This is repeated down to the last level of the U-net; the bottom feature maps are then upsampled and concatenated, via skip connections, with the densely downsampled feature maps of the corresponding level. Repeating this up to the first level of the network, followed by two dense connections and a final 3×3×3 standard convolution, yields the dose distribution prediction. This method obtained good experimental results. Vasant Kearney et al. proposed the DoseNet model, a fully convolutional neural network based on the U-net structure, to predict the dose distribution of prostate radiotherapy patients. DoseNet accepts six three-dimensional matrices as input to the first convolutional layer: the 3D CT and the prostate, bladder, penile bulb, urethra, and rectum are fed into the network as separate input channels, each of the six normalized independently before training. The first convolutional layer maps an input of size 128×128×64×6 to an output of size 128×128×64×16. The output then passes through a series of convolutional downsampling steps, each reducing the first three dimensions while doubling the fourth; this lets the network learn hierarchical relationships over a large receptive field of the CT image. The upsampling steps then successively restore the original dimensions while continuing to learn the nonlinear relationships in the input data, and once the original dimensions are recovered, a final convolutional layer predicts a one-channel three-dimensional dose distribution matrix. Both methods use deep neural networks to predict the radiotherapy dose distribution from medical images and achieve significant experimental results, but each has its own shortcoming: the former does not consider the influence of the organs at risk around the target volume, and the latter cannot effectively exploit the anatomical structure features of the tumor and the organs at risk.
Summary of the Invention
In view of the deficiencies of the prior art, the invention proposes a segmentation-task-assisted three-dimensional dose distribution prediction method for nasopharyngeal carcinoma, taking dose distribution prediction as the main task and nasopharyngeal carcinoma segmentation as the auxiliary task; multi-task joint learning lets the segmentation task guide the prediction task, thereby improving the accuracy of dose distribution prediction. The method comprises the following steps:
Step 1: collect original nasopharyngeal carcinoma images and preprocess them, including annotating the radiotherapy dose and the organ contours;
Step 2: construct a dose distribution prediction model comprising an auxiliary segmentation network, a dose prediction network, and an adversarial network;
Step 3: divide the annotated original nasopharyngeal carcinoma image into a set of image blocks of size 64×64×64, each block denoted I_i, i = 1, 2, …, n, and feed I_i into the auxiliary segmentation network for segmentation training;
Step 4: divide the original nasopharyngeal carcinoma image into a set of image blocks of size 64×64×64, each block denoted I_j, j = 1, 2, …, n, and feed I_j into the dose prediction network while sharing the parameters learned by the auxiliary segmentation network with the dose prediction network; with the assistance of the auxiliary segmentation network, perform the following operations:
Step 41: convolve I_j with a 5×5×5 kernel with stride 1, then apply stride-2 average pooling to the convolved output to obtain I_j1;
Step 42: apply a 3×3×3, stride-2 convolution to I_j1, then stride-2 average pooling to the convolved image to obtain the feature map I_j2;
Step 43: convolve I_j2 with a 3×3×3 kernel with stride 1 to obtain the deep feature map I_j3;
Step 44: apply a 3×3×3, stride-1 deconvolution to the deep feature map I_j3 to obtain the feature map I_D1; to make full use of spatial-domain information, connect I_D1 and I_j3 with a cross-layer (skip) connection to obtain I_E1;
Step 45: upsample the feature map I_E1 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain I_D2; to make full use of spatial-domain information, connect I_D2 and I_j2 with a cross-layer connection to obtain I_E2;
Step 46: upsample the feature map I_E2 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain I_D3; to make full use of spatial-domain information, connect I_D3 and I_j1 with a cross-layer connection to obtain I_E3;
Step 5: fuse I_E1, I_E2, and I_E3 through the multi-scale iterative fusion strategy to obtain the predicted dose distribution image I_E;
Step 6: feed the real dose distribution image pair and the predicted dose distribution image pair into the adversarial network through separate channels; the adversarial network, used to distinguish the predicted dose distribution map from the real one, is trained through the following steps:
Step 61: convolve each of the two image pairs with a 5×5×5 kernel with stride 2, apply Maxout processing to the output, and then 2×2×2 pooling;
Step 62: apply a 3×3×3, stride-2 convolution to the feature map output by step 61, apply Maxout processing to the output, and then 2×2×2 pooling;
Step 63: finally, combine the corresponding channels of the two image pairs, apply a 1×1×1 convolution to each, and classify the results through a Sigmoid activation, thereby judging whether each image pair is real or fake;
Step 7: repeat steps 3 to 6; when the adversarial network can no longer tell real image pairs from fake ones, i.e., when the adversarial loss converges to 0.5, training of the dose distribution prediction model is complete.
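The joint training of steps 3 to 7 can be sketched as follows. The weighted combination of the three losses and the convergence tolerance are illustrative assumptions; the patent specifies only that the prediction task is primary, the segmentation task auxiliary, and that training stops when the adversarial loss converges to 0.5.

```python
def joint_loss(l_pre, l_seg, l_adv, w_seg=0.5, w_adv=0.1):
    """Hypothetical weighted multi-task objective: the main dose-prediction
    loss plus the auxiliary segmentation loss and the adversarial loss.
    The weights w_seg and w_adv are illustrative, not from the patent."""
    return l_pre + w_seg * l_seg + w_adv * l_adv

def training_converged(adv_loss, tol=0.05):
    """Step 7's stopping rule: training ends when the adversarial loss
    converges to 0.5 (the discriminator can no longer tell real pairs
    from predicted ones). `tol` is an assumed tolerance."""
    return abs(adv_loss - 0.5) < tol
```

With illustrative loss values, `joint_loss(1.0, 0.5, 0.2)` returns 1.27, and `training_converged(0.51)` is true while `training_converged(0.8)` is false.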
In a preferred embodiment, the segmentation of step 3 specifically comprises:
Step 31: convolve I_i with a 5×5×5 kernel with stride 1, then apply stride-2 average pooling to the convolved output to obtain I_i1;
Step 32: apply a 3×3×3, stride-2 convolution to I_i1, then stride-2 average pooling to the output to obtain I_i2;
Step 33: convolve I_i2 with a 3×3×3 kernel with stride 1 to obtain the deep feature map I_i3;
Step 34: apply a 3×3×3, stride-1 deconvolution to the deep feature map I_i3 to obtain the feature map I_F1;
Step 35: upsample the feature map I_F1 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain the feature map I_F2;
Step 36: upsample the feature map I_F2 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain the feature map I_F3;
Step 37: upsample the feature map I_F3 by a factor of 2, then apply a 5×5×5, stride-1 deconvolution to obtain the segmentation result, and save the network model parameters of the whole segmentation process.
The beneficial effects of the invention are:
1. Compared with traditional machine-learning and artificial-neural-network methods for predicting dose distributions, the invention extracts image features more effectively, takes less time, and achieves better experimental results.
2. Compared with existing methods that use a deep neural network to predict the three-dimensional dose distribution of medical images directly, the invention uses an auxiliary segmentation network and can fully exploit the correlation between the anatomical structure features and the dosimetric features of the tumor and the OARs, yielding more refined prediction results.
3. By using an adversarial network, the invention obtains dose distribution results close in effect to the plans designed by radiotherapy physicists.
Description of the Drawings
Figure 1 is a structural diagram of the dose distribution prediction model of the invention;
Figure 2 shows the experimental results of the invention;
Figure 2(a) is the real dose distribution image; and
Figure 2(b) is the predicted dose distribution image.
Detailed Description
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are exemplary only and are not intended to limit the scope of the invention. In the following description, descriptions of well-known structures and techniques are omitted to avoid unnecessarily obscuring the concepts of the invention.
A detailed description follows, with reference to the accompanying drawings.
The invention addresses the problem of predicting the radiotherapy dose distribution for nasopharyngeal carcinoma in the field of medical imaging. Accurate segmentation of tumor images helps physicians delineate the tumor contour precisely to determine the radiotherapy target volume, and predicting the radiotherapy dose distribution over tumor images safeguards the accuracy and safety of clinical radiotherapy.
To fully exploit the correlation between the anatomical structure features and the dosimetric features of the tumor and the organs at risk, the invention proposes a segmentation-task-assisted three-dimensional dose distribution prediction method, with dose distribution prediction as the main task and nasopharyngeal carcinoma segmentation as the auxiliary task; multi-task joint learning lets the segmentation task guide the prediction task, improving the accuracy of dose distribution prediction. Figure 1 shows the model structure of the proposed method, which comprises three networks: the auxiliary segmentation network SegNet, the dose prediction network PreNet, and the adversarial network AdvNet. The segmentation network and the prediction network share encoder parameters; by jointly training the segmentation and dose prediction tasks, the model learns their shared representation, strengthens the feature expression ability of the shared encoder, and drives the network, under limited training data, to extract the essential features of the segmentation task that assist dose prediction. To make effective use of the feature information at the decoder's different scales, the invention further proposes an iterative multi-scale fusion (IMF) strategy at the decoder side of the prediction task.
The proposed segmentation-task-assisted three-dimensional dose distribution prediction method for nasopharyngeal carcinoma comprises the following steps:
Step 1: collect original nasopharyngeal carcinoma images and preprocess them, including annotating the radiotherapy dose and the organ contours.
Step 2: construct a dose distribution prediction model comprising an auxiliary segmentation network, a dose prediction network, and an adversarial network.
Step 3: divide the annotated original nasopharyngeal carcinoma image into a set of image blocks of size 64×64×64, each block denoted I_i, i = 1, 2, …, n, and feed I_i into the auxiliary segmentation network for segmentation training.
Step 31: convolve I_i with a 5×5×5 kernel with stride 1, then apply stride-2 average pooling to the convolved output to obtain I_i1.
Step 32: apply a 3×3×3, stride-2 convolution to I_i1, then stride-2 average pooling to the output to obtain I_i2.
Step 33: convolve I_i2 with a 3×3×3 kernel with stride 1 to obtain the deep feature map I_i3.
Step 34: apply a 3×3×3, stride-1 deconvolution to the deep feature map I_i3 to obtain the feature map I_F1.
Step 35: upsample the feature map I_F1 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain the feature map I_F2.
Step 36: upsample the feature map I_F2 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain the feature map I_F3.
Step 37: upsample the feature map I_F3 by a factor of 2, then apply a 5×5×5, stride-1 deconvolution to obtain the segmentation result, and save the network model parameters of the whole segmentation process.
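As a sanity check on steps 31 to 37, the feature-map side lengths through the shared encoder and the segmentation decoder can be traced for a 64×64×64 input block. The sketch below assumes 'same' padding for the convolutions (the patent does not state the padding), so that only the strides, poolings, and upsamplings change the spatial size.

```python
def conv_size(size, stride):
    # side length after a 'same'-padded convolution (assumed padding)
    return -(-size // stride)  # ceiling division

def trace_segnet(size=64):
    sizes = {}
    s = conv_size(size, 1) // 2          # step 31: 5x5x5 conv, stride 1, then stride-2 avg pool
    sizes['I_i1'] = s                    # 32
    s = conv_size(s, 2) // 2             # step 32: 3x3x3 conv, stride 2, then stride-2 avg pool
    sizes['I_i2'] = s                    # 8
    sizes['I_i3'] = conv_size(s, 1)      # step 33: 3x3x3 conv, stride 1 (size unchanged)
    sizes['I_F1'] = sizes['I_i3']        # step 34: stride-1 deconv keeps the size
    sizes['I_F2'] = sizes['I_F1'] * 2    # step 35: x2 upsample + stride-1 deconv
    sizes['I_F3'] = sizes['I_F2'] * 2    # step 36: x2 upsample + stride-1 deconv
    sizes['output'] = sizes['I_F3'] * 2  # step 37: x2 upsample + 5x5x5 deconv
    return sizes

sizes = trace_segnet(64)
assert sizes['output'] == 64  # the decoder restores the input resolution
```

Under these assumptions the trace gives I_i1 = 32³, I_i2 = I_i3 = 8³, and the three decoder upsamplings restore the original 64³ resolution, matching the three pooling/striding reductions in the encoder.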
Step 4: divide the original nasopharyngeal carcinoma image into a set of image blocks of size 64×64×64, each block denoted I_j, j = 1, 2, …, n, and feed I_j into the dose prediction network while sharing the parameters learned by the auxiliary segmentation network with the dose prediction network; with the assistance of the auxiliary segmentation network, perform the following operations:
Step 41: convolve I_j with a 5×5×5 kernel with stride 1, then apply stride-2 average pooling to the convolved output to obtain I_j1.
Step 42: apply a 3×3×3, stride-2 convolution to I_j1, then stride-2 average pooling to the convolved image to obtain the feature map I_j2.
Step 43: convolve I_j2 with a 3×3×3 kernel with stride 1 to obtain the deep feature map I_j3.
Step 44: apply a 3×3×3, stride-1 deconvolution to the deep feature map I_j3 to obtain the feature map I_D1; to make full use of spatial-domain information, connect I_D1 and I_j3 with a cross-layer (skip) connection to obtain I_E1.
Step 45: upsample the feature map I_E1 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain I_D2; to make full use of spatial-domain information, connect I_D2 and I_j2 with a cross-layer connection to obtain I_E2.
Step 46: upsample the feature map I_E2 by a factor of 2, then apply a 3×3×3, stride-1 deconvolution to obtain I_D3; to make full use of spatial-domain information, connect I_D3 and I_j1 with a cross-layer connection to obtain I_E3.
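The cross-layer connections in steps 44 to 46 can be read as channel-wise concatenation of a decoder feature map with the matching encoder feature map, the usual form of a skip connection. The sketch below illustrates this in numpy with a channels-first layout; the channel counts (32 and 64) are hypothetical, since the patent does not specify them.

```python
import numpy as np

def skip_connect(decoder_feat, encoder_feat):
    """Cross-layer connection: concatenate decoder and encoder feature
    maps along the channel axis (axis 0 in channels-first layout)."""
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

# hypothetical channel counts at the deepest 8x8x8 level
i_d1 = np.zeros((32, 8, 8, 8))   # I_D1 from the stride-1 deconvolution
i_j3 = np.zeros((64, 8, 8, 8))   # I_j3 from the shared encoder
i_e1 = skip_connect(i_d1, i_j3)  # I_E1 carries both feature sets
assert i_e1.shape == (96, 8, 8, 8)
```

Concatenation (rather than addition) preserves both feature sets and lets the following deconvolution learn how to weigh spatial-domain detail from the encoder against the decoder's semantic features.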
Step 5: fuse I_E1, I_E2, and I_E3 through the multi-scale iterative fusion strategy to obtain the predicted dose distribution map I_E.
Specifically, predictions at different scales are first obtained at the decoder side of the prediction task. The prediction at the smallest scale is then upsampled to the size of the next scale and fused with the prediction at that scale; fusion proceeds iteratively, scale by scale, until the fused image at the largest scale is obtained, which is the final predicted dose distribution. The multi-scale iterative fusion strategy can be expressed as:
其中,pS为当前尺度下的预测结果,pS+1为下一尺度的预测结果,Up(·)为上采样操作,为下一尺度融合后的结果;S表示为不同尺度特征图的数量。Among them, p S is the prediction result at the current scale, p S+1 is the prediction result of the next scale, Up( ) is the upsampling operation, is the result of fusion at the next scale; S is the number of feature maps at different scales.
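The iterative fusion described above can be sketched as follows. The additive fusion operator and factor-2 nearest-neighbour upsampling are assumptions for illustration, since the patent does not fully specify the fusion operator or the upsampling method.

```python
import numpy as np

def upsample2(p):
    """Nearest-neighbour upsampling by 2 in each spatial dimension (Up(.))."""
    return p.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def iterative_fuse(preds):
    """preds: decoder predictions ordered from smallest to largest scale.
    Repeatedly upsample the fused result and combine it with the
    prediction at the next larger scale; the result at the largest
    scale is the final predicted dose distribution."""
    fused = preds[0]
    for p in preds[1:]:
        fused = p + upsample2(fused)  # additive fusion (one possible choice)
    return fused

p_small = np.ones((8, 8, 8))
p_mid = np.ones((16, 16, 16))
p_large = np.ones((32, 32, 32))
dose = iterative_fuse([p_small, p_mid, p_large])
assert dose.shape == (32, 32, 32)
```

With these all-ones inputs the fused map accumulates one contribution per scale, so every voxel ends up with the value 3.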
Step 6: Input the real dose distribution image pair and the predicted dose distribution image pair into the adversarial network through separate channels, and train the adversarial network to distinguish the predicted dose distribution map from the real dose distribution map. The real dose distribution image pair consists of the original nasopharyngeal carcinoma image and the real dose distribution image, where the real dose distribution image is manually annotated by an experienced physician; the predicted dose distribution image pair consists of the original nasopharyngeal carcinoma image and the predicted dose distribution image, where the predicted dose distribution image is the output of the dose prediction network.
Step 61: Convolve each of the two image pairs with a 5×5×5 convolution kernel with stride 2, apply Maxout to the output, and then apply 2×2×2 pooling.
Step 62: Perform a convolution with kernel size 3×3×3 and stride 2 on the feature map output by Step 61, apply Maxout to the output, and then apply 2×2×2 pooling.
Step 63: Finally, combine the corresponding channels of the two image pairs, apply a 1×1×1 convolution to each, and classify the result with a Sigmoid activation to judge whether each image pair is real or fake.
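The Maxout activation and Sigmoid classifier used in Steps 61–63 can be illustrated in isolation. This is a numpy sketch of the two operations only, not the full discriminator; the channel layout and piece count are illustrative assumptions.

```python
import numpy as np

def maxout(x, pieces=2):
    """Maxout activation: partition the channel axis into groups of
    `pieces` linear feature maps and keep the element-wise maximum.
    x has layout (channels, depth, height, width)."""
    c = x.shape[0]
    assert c % pieces == 0, "channel count must be divisible by pieces"
    return x.reshape(c // pieces, pieces, *x.shape[1:]).max(axis=1)

def sigmoid(z):
    """Sigmoid activation used for the final real/fake classification."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.arange(32.0).reshape(4, 2, 2, 2)   # 4 channels, 2x2x2 spatial
y = maxout(x, pieces=2)
assert y.shape == (2, 2, 2, 2)            # channels halved by Maxout
```

Unlike ReLU, Maxout learns its own activation shape from the competing linear pieces, at the cost of multiplying the channel count of the preceding convolution.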
Step 7: Repeat Steps 3 to 6. When the adversarial network can no longer distinguish real image pairs from fake ones, i.e., when the adversarial loss converges to 0.5, training of the dose distribution prediction model is complete, and the prediction network can generate dose distribution images close to the real dose distribution images.
The loss function of the three-dimensional dose distribution prediction model consists of the Dice loss of the auxiliary segmentation network, the loss of the adversarial network, and the loss of the prediction network, expressed as follows:
(1) Dice loss of the auxiliary segmentation network:

L_Dice = 1 − 2·|pred ∩ gt| / (|pred| + |gt|)

where pred denotes the segmentation result produced by the segmentation network and gt denotes the gold standard.
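A minimal sketch of the soft Dice loss, assuming the standard formulation (the patent states only the symbol definitions); `eps` is a small smoothing constant added for numerical stability:

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-6):
    """Soft Dice loss between a predicted probability map and the
    gold-standard mask: 1 - 2|pred . gt| / (|pred| + |gt|)."""
    inter = np.sum(pred * gt)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(gt) + eps)

gt = np.array([1.0, 1.0, 0.0, 0.0])
assert dice_loss(gt, gt) < 1e-5                    # perfect overlap -> ~0
assert abs(dice_loss(1 - gt, gt) - 1.0) < 1e-5     # no overlap -> ~1
```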
(2) Loss of the adversarial network:

L_adv = −log D(Y) − log(1 − D(PreNet(I, S)))

where D(Y) is the probability that the adversarial network judges the real data to be real, and D(PreNet(I, S)) is the probability that it judges the prediction result to be real.
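The adversarial loss above can be sketched as a standard binary cross-entropy discriminator objective (an assumption, since the patent gives only the symbol definitions):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy discriminator loss: D(Y) should approach 1
    for real pairs, D(PreNet(I, S)) should approach 0 for predicted pairs.
    `eps` guards against log(0)."""
    return -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)

# A perfectly fooled discriminator outputs 0.5 for both inputs,
# matching the convergence criterion mentioned in Step 7.
assert abs(adversarial_loss(0.5, 0.5) - 2 * np.log(2)) < 1e-6
```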
(3) To make the predicted dose distribution map approach the real dose distribution map, the loss of the prediction network combines a voxel-level L1-norm loss and a perceptual loss based on the VGG network:

L_pre = ‖Y − PreNet(I, S)‖₁ + ‖VGG(Y) − VGG(PreNet(I, S))‖₁

where Y denotes the real data, PreNet(·) the prediction result, VGG(Y) the output of the VGG network for the real data, and VGG(PreNet(I, S)) the output of the VGG network for the prediction result.
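A minimal sketch of the combined prediction loss. The `feat` argument stands in for the frozen VGG feature extractor, which is not reproduced here; the toy surrogate and the relative weighting `w_perc` are illustrative assumptions.

```python
import numpy as np

def prediction_loss(pred, real, feat, w_perc=1.0):
    """Voxel-wise L1 loss plus a VGG-style perceptual loss computed
    in the feature space of `feat` (a placeholder for the VGG network)."""
    l1 = np.mean(np.abs(pred - real))
    perceptual = np.mean(np.abs(feat(pred) - feat(real)))
    return l1 + w_perc * perceptual

feat = lambda x: x ** 2        # hypothetical stand-in "feature extractor"
real = np.ones((4, 4, 4))
assert prediction_loss(real, real, feat) == 0.0   # identical inputs -> 0
```

The perceptual term penalizes differences in higher-level structure that a pure voxel-wise L1 loss can miss, which is why the two are combined.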
Finally, the overall loss function of the network is

L = L_pre + λ1·L_Dice + λ2·L_adv

where λ1 and λ2 are weight parameters.
By using the shared encoder network to extract the low-level features of the auxiliary segmentation task, features helpful to the dose prediction task are effectively transferred to it, improving the representation-extraction capability of the prediction task. In addition, fusing feature information from different levels through the multi-scale weighting strategy is expected to yield more accurate prediction results.
After the dose distribution prediction result is obtained in the testing phase, the conformity index CI and the heterogeneity index HI are computed between the prediction and the gold standard. The gold standard refers to dose distribution images annotated by professional physicians; both CI and HI are numbers between 0 and 1. The closer the computed CI is to 1, and the closer HI is to 0, the more accurate the prediction. Table 1 gives the calculation method and meaning of the evaluation metrics used in the present invention.
Table 1. Evaluation Metrics
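The body of Table 1 did not survive extraction, so for orientation, common definitions of the two indices can be sketched as follows. Both formulas (the Paddick-style CI and the (D2 − D98)/D50 HI) are assumptions consistent with the stated ranges, not taken from the patent itself.

```python
def conformity_index(v_target, v_ref_iso, v_overlap):
    """One common CI definition: overlap^2 / (target volume x reference
    isodose volume). Equals 1 only when target and reference coincide."""
    return v_overlap ** 2 / (v_target * v_ref_iso)

def heterogeneity_index(d2, d98, d50):
    """One common HI definition: (D2 - D98) / D50, where Dx is the dose
    received by x% of the target volume; 0 means a perfectly homogeneous
    dose."""
    return (d2 - d98) / d50

assert conformity_index(100.0, 100.0, 100.0) == 1.0   # perfect conformity
assert heterogeneity_index(70.0, 70.0, 70.0) == 0.0   # perfect homogeneity
```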
Fig. 2 shows the experimental results of the present invention, where Fig. 2(a) is the real dose distribution image and Fig. 2(b) is the predicted dose distribution image. Comparing the two shows that the contours and details of the predicted dose distribution image closely approximate the real dose distribution image, demonstrating that the method can effectively predict the three-dimensional dose distribution of medical images.
It should be noted that the above specific embodiments are exemplary; those skilled in the art may devise various solutions inspired by the disclosure of the present invention, and such solutions also belong to the disclosure of the present invention and fall within its protection scope. Those skilled in the art should understand that the description and the accompanying drawings are illustrative and do not limit the claims. The protection scope of the present invention is defined by the claims and their equivalents.