
CN111815569A - Image segmentation method, apparatus, device, and storage medium based on deep learning - Google Patents

Image segmentation method, apparatus, device, and storage medium based on deep learning

Info

Publication number
CN111815569A
CN111815569A (application CN202010544386.8A)
Authority
CN
China
Prior art keywords
segmentation
image data
image
model
image segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010544386.8A
Other languages
Chinese (zh)
Other versions
CN111815569B (en)
Inventor
曹桂平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202010544386.8A priority Critical patent/CN111815569B/en
Publication of CN111815569A publication Critical patent/CN111815569A/en
Application granted granted Critical
Publication of CN111815569B publication Critical patent/CN111815569B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain


Abstract

Embodiments of the present invention disclose a deep learning-based image segmentation method, apparatus, terminal device, and storage medium. A training set is obtained by applying segmentation-labeling operations to sample image data selected from organ image data, the organ image data being obtained by scanning an organ. An initial image segmentation model is generated, and the training set is input into it for model training to obtain an image segmentation model for image structure segmentation; the initial model is obtained by adding, at the central layer of a U-Net framework, a regularization branch parallel to the decoder branch of the U-Net framework, the regularization branch imposing a regularization constraint on the encoder branch of the U-Net framework. Organ image data to be segmented is then input into the trained image segmentation model to produce segmentation labels for it. By imposing this additional constraint on the encoder weights of the U-Net, more robust features can be extracted from the image data.

Description

Image segmentation method, apparatus, device, and storage medium based on deep learning

Technical Field

Embodiments of the present invention relate to the technical field of image processing, and in particular to a deep learning-based image segmentation method, apparatus, terminal device, and storage medium.

Background Art

In recent years, non-invasive imaging techniques have made brain image data easy to acquire. Through imaging analysis, the anatomical structures and internal lesions of the brain can be quantitatively analyzed and used as an effective basis for disease diagnosis and treatment. Magnetic resonance imaging (MRI) is an effective technique for brain image analysis. Automatic segmentation of the anatomical structures in brain MRI images (e.g., gray matter, white matter, cerebrospinal fluid) is the basis for quantitative analysis of the brain's tissue regions and a key preliminary step for lesion analysis, 3D visualization, and surgical navigation. Research into accurate, fast, and efficient automatic segmentation of brain structures is therefore of great significance.

Brain MRI images reveal the complex anatomy inside the brain. Blurred boundaries, low contrast, low spatial resolution, partial volume effects, and large morphological differences between tissue structures all make structural segmentation of brain MRI images substantially more difficult.

Traditional brain structure segmentation relies mainly on manual and semi-automatic methods. Manual segmentation is time-consuming, tedious, and expensive; its results depend heavily on individual experience and are difficult to reproduce. Semi-automatic methods mainly include intensity-based methods (thresholding, region growing, fuzzy clustering, etc.), probabilistic-atlas-based methods, and active-contour-based methods (geodesic and level-set methods, etc.). These methods still depend on prior interactive input from an operator, and neither their accuracy nor their efficiency reaches the level required for practical application.

At present, deep learning-based image segmentation methods are developing rapidly in the field of medical image segmentation. Compared with traditional methods, they offer major advantages in segmentation accuracy and in fully automatic operation.

However, when applying deep learning-based segmentation to brain MRI images, the inventors found that, owing to the design of the neural network architecture, existing methods lose output accuracy under even limited perturbations of the input, so the segmentation system as a whole is not robust. Moreover, this phenomenon is common: beyond brain MRI segmentation, the same lack of robustness appears when segmenting data from other organs and imaging modalities.

SUMMARY OF THE INVENTION

The present invention provides a deep learning-based image segmentation method, apparatus, terminal device, and storage medium to address the insufficient robustness of the prior art when segmenting image data with clearly delineated regions.

In a first aspect, an embodiment of the present invention provides a deep learning-based image segmentation method, including:

obtaining a training set based on segmentation-labeling operations on sample image data selected from organ image data, the organ image data being obtained by scanning an organ;

generating an initial image segmentation model and inputting the training set into it for model training to obtain an image segmentation model for image structure segmentation, the initial image segmentation model being obtained by adding, at the central layer of a U-Net framework, a regularization branch parallel to the decoder branch of the U-Net framework, the regularization branch being used to impose a regularization constraint on the encoder branch of the U-Net framework; and

inputting organ image data to be segmented into the image segmentation model so as to produce segmentation labels for that data.

In a second aspect, an embodiment of the present invention further provides a deep learning-based image segmentation apparatus, including:

a training set generation unit, configured to obtain a training set based on segmentation-labeling operations on sample image data selected from organ image data, the organ image data being obtained by scanning an organ;

a model training unit, configured to generate an initial image segmentation model and input the training set into it for model training to obtain an image segmentation model for image structure segmentation, the initial image segmentation model being obtained by adding, at the central layer of a U-Net framework, a regularization branch parallel to the decoder branch of the U-Net framework, the regularization branch being used to impose a regularization constraint on the encoder branch of the U-Net framework; and

a segmentation labeling unit, configured to input organ image data to be segmented into the image segmentation model so as to produce segmentation labels for that data.

In a third aspect, an embodiment of the present invention further provides a terminal device, including:

one or more processors; and

a memory for storing one or more programs,

wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the deep learning-based image segmentation method of the first aspect.

In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the deep learning-based image segmentation method of the first aspect.

With the above deep learning-based image segmentation method, apparatus, terminal device, and storage medium, a training set is obtained from segmentation-labeling operations on sample image data selected from organ image data obtained by scanning an organ; an initial image segmentation model is generated and trained on the training set to obtain an image segmentation model for image structure segmentation, the initial model being obtained by adding, at the central layer of a U-Net framework, a regularization branch parallel to the decoder branch, which imposes a regularization constraint on the encoder branch; and organ image data to be segmented is input into the trained model to produce segmentation labels. Because the regularization branch added at the central layer combines with the U-Net encoder to form a variational autoencoder, and this variational autoencoder shares encoder weights with the U-Net, the loss function of the variational autoencoder places an additional constraint on the encoder weights of the U-Net, so the overall model can extract more robust features from the image data.

Description of the Drawings

FIG. 1 is a flowchart of a deep learning-based image segmentation method provided in Embodiment 1 of the present invention;

FIG. 2 is a schematic diagram of brain MRI image data;

FIG. 3 is a schematic diagram of manual segmentation labels for brain MRI image data;

FIG. 4 is a schematic diagram of the structure of the image segmentation model of the present scheme;

FIG. 5 is a schematic diagram of the multi-scale structure placed at the central layer in the present scheme;

FIG. 6 shows the result of segmenting brain MRI image data from the MRBrainS18 dataset based on the present scheme;

FIG. 7 shows the result of segmenting brain MRI image data from the IBSR dataset based on the present scheme;

FIG. 8 is a schematic structural diagram of a deep learning-based image segmentation apparatus provided in Embodiment 2 of the present invention;

FIG. 9 is a schematic structural diagram of a terminal device provided in Embodiment 3 of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the complete structures.

It should be noted that, owing to space constraints, this specification does not exhaustively list all optional implementations. After reading this specification, a person skilled in the art will appreciate that any combination of technical features can constitute an optional implementation, as long as those features do not contradict one another.

For example, one implementation of Embodiment 1 describes the technical feature that preprocessing the organ image data mainly involves preprocessing related to gray density and/or gray range, so as to obtain organ image data with consistent gray density and/or gray range; another implementation of Embodiment 1 describes the technical feature that, given the content characteristics of brain MRI image data, the encoder performs downsampling three times. Since these two features do not contradict each other, a person skilled in the art will appreciate, after reading this specification, that an implementation with both features is also optional: after the gray-density and gray-range preprocessing of the organ images, three downsampling steps are performed during training.

It should also be noted that an embodiment of the present scheme need not combine all the technical features described in Embodiment 1; some features are described for an optimized realization of the scheme. Any combination of the features described in Embodiment 1 that fulfills the design intent of the scheme can serve as a dependent implementation and, of course, as a concrete product form.

The embodiments are described in detail below.

Embodiment 1

FIG. 1 is a flowchart of a deep learning-based image segmentation method provided in Embodiment 1 of the present invention. The method provided in this embodiment can be executed by various operating devices for image segmentation. Such a device can be implemented in software and/or hardware and can consist of two or more physical entities or of a single physical entity.

Specifically, referring to FIG. 1, the deep learning-based image segmentation method includes the following steps.

Step S101: obtain a training set based on segmentation-labeling operations on sample image data selected from organ image data, the organ image data being obtained by scanning an organ.

The organ image data to which this scheme applies are chiefly images of a kind in which the regions corresponding to the tissue structures are similarly distributed across images, such as fundus images and brain MRI image data; brain MRI image data are used as the running example here. Brain MRI image data are obtained by scanning the brains of multiple subjects with an MRI scanner. When generating brain MRI image data, various data modes may be required, such as T1-weighted imaging (as shown in FIG. 2), T1-IR-weighted imaging, and T2-weighted imaging. Imaging under different data-mode requirements calls for an additional registration operation during image preprocessing; in this scheme, to improve segmentation efficiency and reduce cost, only T1-weighted imaging data are used. Of course, where sufficient processing capacity is available, multiple data modes can also be processed jointly.

When only T1-weighted imaging data are used and no registration is needed, preprocessing the organ image data mainly means preprocessing related to gray density and/or gray range, so as to obtain organ image data with consistent gray density and/or gray range. During acquisition, brain MRI image data are affected by the coils and other factors, so the collected data exhibit irregular, slowly varying gray-density changes that interfere with digital image segmentation and quantitative analysis. In this scheme, bias field correction is applied to the brain MRI image data to obtain data with consistent gray density; specifically, the N4 Bias Field Correction method can be used. In addition, because the gray range of each image slice obtained by MRI is inconsistent, a gray-scale normalization operation is required; specifically, the gray distribution of each image can be normalized to (0, 1) as follows:

I = (I − I_min) / (I_max − I_min)

where I denotes the gray matrix of the current sequence image, and I_min and I_max denote the minimum and maximum values of that gray matrix, respectively.
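The normalization above can be sketched in a few lines. This is a minimal illustration of the min-max formula on a toy gray matrix, not the patent's actual preprocessing code:

```python
def normalize_intensity(gray):
    """Min-max normalize a gray matrix to (0, 1):
    I = (I - I_min) / (I_max - I_min)."""
    flat = [v for row in gray for v in row]
    i_min, i_max = min(flat), max(flat)
    scale = i_max - i_min
    return [[(v - i_min) / scale for v in row] for row in gray]

# Toy 2x2 "slice" with gray values in [0, 200]
norm = normalize_intensity([[0.0, 50.0], [100.0, 200.0]])
# → [[0.0, 0.25], [0.5, 1.0]]
```

In practice the same scaling would be applied slice by slice; a slice with a constant gray value (scale of zero) would need a guard, omitted here for brevity.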

On the basis of the preprocessed organ image data, several preprocessed images are selected as sample image data and given segmentation labels to form the training set. The labeling is performed manually, and the labeled content comprises the target regions to be segmented; for brain MRI image data, the target regions are, for example, cortical gray matter (GM), white matter, and cerebrospinal fluid. Manual labeling of the segmented images yields a gold-standard (ground truth) image corresponding to the organ image data. FIG. 3 shows such a gold-standard image, in which 10 is the cerebrospinal fluid region, 11 the gray matter region, and 12 the white matter region; the black area outside the brain structures is background.

Step S102: generate an initial image segmentation model and input the training set into it for model training to obtain an image segmentation model for image structure segmentation, the initial image segmentation model being obtained by adding, at the central layer of a U-Net framework, a regularization branch parallel to the decoder branch of the U-Net framework, the regularization branch being used to impose a regularization constraint on the encoder branch of the U-Net framework.

As shown in FIG. 4, the main body of the image segmentation model in this scheme is the U-Net framework, i.e., the U-shaped network formed by the two branches on the left of FIG. 4. The left branch of the U-shaped network is the encoder branch, the right branch is the decoder branch, and at the bottom is the central layer of the U-shaped network. In this scheme, the overall neural network architecture builds on the U-shaped network by adding a regularization branch (the rightmost branch in FIG. 4) at its central layer; this branch runs parallel to the decoder branch and imposes a regularization constraint on the encoder branch.

The image segmentation model constructed in this scheme can be viewed as a joint realization of a U-shaped network and a variational autoencoder. The network parameters of the encoder branch are shared by the U-shaped network and the variational autoencoder: the encoder and decoder branches form a complete U-shaped network, so the encoder branch is constrained by the decoder branch's loss function, while the encoder branch and the regularization branch form a complete variational autoencoder, so the encoder branch is also constrained by the variational autoencoder's reconstruction error. The encoder branch is thus constrained in two ways, and through the reconstruction error the regularization branch is able to capture the features of the image. Moreover, the regularization branch operates only during training of the image segmentation model; when the trained model segments organ image data, the branch takes no part in the data processing that produces the segmentation labels. The regularization branch effectively introduces the idea of adversarial image generation into the segmentation model, pushing the encoder branch to extract more useful information.

Overall, because the U-shaped network and the variational autoencoder share the weights of the encoder branch, the loss function of the regularization branch places an additional constraint on those weights, so the encoder branch extracts more robust features and, in turn, the trained model segments organ image data more robustly.
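The dual constraint on the shared encoder amounts to one combined training objective. The sketch below assumes the two branch losses are simply summed with a weighting factor; the patent does not state how (or whether) the two terms are weighted, so `vae_weight` is purely illustrative:

```python
def training_objective(seg_loss, recon_mse, kl_div, vae_weight=0.1):
    """Combined objective for the shared encoder: the U-Net decoder-branch
    segmentation loss plus the regularization-branch (VAE) loss, where
    L_VAE = reconstruction MSE + KL term. During training, gradients of
    both terms flow into the shared encoder weights, which is what
    imposes the extra constraint on them."""
    return seg_loss + vae_weight * (recon_mse + kl_div)

loss = training_objective(seg_loss=0.8, recon_mse=0.5, kl_div=0.2)
```

At inference time only `seg_loss`'s forward path (encoder plus decoder) is evaluated; the VAE terms exist only to shape the encoder weights during training.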

On the basis of the overall architecture above, the training process of the image segmentation model in step S102 can be further realized through steps S1021 to S1026.

Step S1021: generate an initial image segmentation model.

When initializing the image segmentation model, the parameter details can be further initialized on top of the overall model architecture described above, adapted to different types of organ image data.

The loss function of the regularization branch can be computed from the KL divergence between the posterior distribution q_φ(z|x) and the likelihood p_θ(z|x), where x is the input image, z is the latent variable of the variational autoencoder formed by the regularization branch and the encoder branch, and φ and θ are the parameters to be learned by the networks of the encoder branch and decoder branch, respectively.

Specifically, the loss function L_VAE(θ, φ) is computed as:

L_VAE(θ, φ) = E_{q_φ(z|x)}[‖x − x̂‖²] + D_KL(q_φ(z|x) ‖ p_θ(z))

where E_{q_φ(z|x)}[‖x − x̂‖²] denotes the reconstruction mean squared error and D_KL(q_φ(z|x) ‖ p_θ(z)) denotes the reverse KL divergence between the prior distribution p_θ(z) and the posterior distribution q_φ(z|x); the model wants these two distributions to be as close as possible, hence this constraint. Because the variational autoencoder and the U-shaped network share the weights of the encoder branch, the loss function of the variational autoencoder (i.e., of the regularization branch) places an additional constraint on the encoder weights, making the encoder branch extract more robust features and thus improving the robustness of the overall model.

In addition, Gaussian noise is added between the central layer and the regularization branch. Injecting Gaussian noise at the hidden layer improves the model's resistance to interference, improves its generalization performance, and somewhat mitigates the problem of sample imbalance. The hidden layer refers to the low-dimensional feature representation of the network, i.e., its central part; as shown in FIG. 4, the Gaussian noise is added between the network center and the regularization branch and, mathematically, is realized through the N(μ, σ²) (normal distribution) constraint shown in FIG. 4.
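This noise injection corresponds to the usual reparameterized sampling of the latent variable. A minimal sketch, assuming the standard z = μ + σ·ε formulation (the patent itself only specifies the N(μ, σ²) constraint):

```python
import random

def sample_latent(mu, sigma, rng):
    """Reparameterized draw z = mu + sigma * eps with eps ~ N(0, 1):
    the Gaussian noise inserted between the central layer and the
    regularization branch. In a real framework this form keeps the
    sample differentiable w.r.t. mu and sigma; here it is plain
    arithmetic for illustration."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

rng = random.Random(0)
z = sample_latent([0.0, 0.0], [1.0, 1.0], rng)          # one 2-dim latent sample
z_det = sample_latent([1.5], [0.0], random.Random(1))   # sigma = 0 → deterministic
```

With σ = 0 the noise disappears and the branch degenerates to a plain autoencoder, which is why a nonzero σ (and the KL term that keeps it nonzero) matters for the regularizing effect.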

In a specific implementation, the loss function of the decoder may be the binary cross-entropy loss, i.e.:

L_BCE = -Σ_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]

where y_i denotes the ground truth and p_i denotes the output of the logistic layer.
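The binary cross-entropy above can be rendered directly in NumPy; this is a generic illustration (the clipping constant `eps` is an assumption added for numerical safety, not part of the patent text):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-7):
    """Mean binary cross-entropy between ground truth y and predictions p."""
    p = np.clip(p, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
print(round(binary_cross_entropy(y, p), 4))   # → 0.1446
```

The loss is near zero when the predicted probabilities match the ground truth and grows without bound as they diverge, which is why clipping is needed in practice.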

When necessary, a multi-scale structure can also be placed at the central layer. The multi-scale structure is an atrous (dilated) convolution module (Atrous Conv Module), whose structure is shown in Figure 5. Atrous convolution enlarges the receptive field of the extracted features and improves the algorithm's segmentation performance for objects at different scales. In this scheme, by setting different dilation rates (r=1, r=2, r=5, r=7), object features at different scales can be captured.
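To illustrate why the dilation rates r=1, 2, 5, 7 capture different scales, the sketch below implements a naive 1-D dilated convolution and prints the effective receptive field (k-1)·r+1 of a 3-tap kernel for each rate; it is a simplified stand-in for the 2-D module of Figure 5:

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """Naive 'valid' 1-D dilated convolution with dilation rate `rate`.

    Equivalent to a normal convolution whose taps are spaced `rate`
    samples apart, enlarging the receptive field without extra weights.
    """
    k = len(w)
    span = (k - 1) * rate + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(20, dtype=float)
w = np.array([1.0, 1.0, 1.0])
for r in (1, 2, 5, 7):                 # the rates used in this scheme
    print(r, (len(w) - 1) * r + 1)     # receptive fields: 3, 5, 11, 15
```

With the same three weights, raising the rate from 1 to 7 widens the receptive field from 3 to 15 samples, which is how the module gathers context at several scales in parallel.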

The central layer may also include a Dropout operation to prevent overfitting and improve the generalization performance of the model.

To match the content characteristics of brain MRI image data, every convolutional layer of the encoder branch has the same number of neurons. Relative to most U-shaped networks, this in effect increases the number of neurons in the initial layers and reduces the number in the deep layers. At the same time, since deep-layer neurons are comparatively cheap in memory footprint and computation, the encoder branch is given the same neuron count at every layer. Analysis of brain MRI image data shows that, compared with low-resolution images, high-resolution images contain richer image detail. Likewise, among feature maps, high-resolution feature maps carry richer and more complex information and need more neurons to learn it, while low-resolution feature maps carry less information, whose useful features can be learned by fewer neurons.

In addition, given the content characteristics of brain MRI image data, the encoder downsamples 3 times. As a deep neural network grows deeper, its parameters, training difficulty and training cost increase substantially. In the brain-structure segmentation task, although brain structure differs considerably from person to person, the overall structural features are relatively consistent, and a shallow network can already learn these salient features. Compared with a deep network, gradient back-propagation in a shallow network is more effective, which helps improve segmentation accuracy. Of course, for other types of organ image data, a more suitable number of downsampling steps can be set after analyzing the image characteristics.
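The parameter savings from an equal-width, three-downsampling encoder can be checked with simple arithmetic. The channel counts below (64 per layer versus the classic doubling 64 to 1024) are illustrative assumptions, not values taken from the patent:

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of one k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def encoder_params(widths):
    """Two 3x3 convolutions per stage, single-channel (MRI) input."""
    total, c_in = 0, 1
    for c in widths:
        total += conv_params(c_in, c) + conv_params(c, c)
        c_in = c
    return total

doubling = encoder_params([64, 128, 256, 512, 1024])  # classic U-Net-style encoder
uniform = encoder_params([64, 64, 64, 64])            # equal width, 3 downsamplings
print(uniform / doubling)  # a small fraction of the classic parameter count
```

Almost all parameters of the doubling encoder sit in its deepest stages, so fixing the width and dropping one downsampling step removes the bulk of them, consistent with the direction of the reduction described above.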

With the special treatment of network depth and neuron count described above, these two strategies can reduce the network parameters to 20% of a conventional U-shaped network, cutting computation and algorithm runtime while improving segmentation accuracy and speed, which is of great significance for practical applications.

步骤S1022:将所述训练集输入所述初始图像分割模型进行模型训练,得到过渡图像分割模型。Step S1022: Input the training set into the initial image segmentation model for model training to obtain a transition image segmentation model.

Generally, multiple rounds of training on top of the initial image segmentation model are needed before the final image segmentation model is obtained. In this scheme, the image segmentation model obtained after each round of training is defined as a transition image segmentation model. That is, the training procedure is the same for the initial image segmentation model and the transition image segmentation model; the difference lies mainly in the updates to the training set, and the two terms merely distinguish the different stages for clarity of description.

步骤S1023:将未分割标记的脏器图像数据输入所述过渡图像分割模型进行分割标记,对分割标记结果进行修正确定。Step S1023: Input the organ image data that is not segmented and marked into the transition image segmentation model for segmentation and marking, and correct and determine the segmentation and marking result.

In the actual model training process, the training set built from the sample image data may contain rather few samples. To reduce manual work, this scheme feeds unlabeled organ image data into the transition image segmentation model for segmentation labeling, which serves both as a test of the transition image segmentation model and as a way to enlarge the training set. Specifically, the correction-and-confirmation of a segmentation labeling result has two outcomes: the first is that no correction is needed, the second is that correction is needed. In the first case it can broadly be concluded that the transition image segmentation model passes the test, and step S1026 can be executed; in the second case the segmentation labeling result is corrected, and the corrected result can be added to the training set for further training, i.e. step S1024 is executed. Through this scheme of correcting segmentation labeling results, with manual confirmation and only a small amount of correction, reliable segmentation labels for organ image data can be obtained quickly and the training set rapidly enlarged.

步骤S1024:将修正后的分割标记结果添加更新到所述训练集。Step S1024: Add and update the corrected segmentation and labeling result to the training set.

步骤S1025:根据添加更新后的训练集对所述过渡图像分割模型进行训练更新。Step S1025: Train and update the transition image segmentation model according to the added and updated training set.

训练集的更新以及过渡图像分割模型的更新与前述的训练集构建以及初始图像数据分割模型的训练过程相似,在此不做重复说明。The updating of the training set and the updating of the transitional image segmentation model are similar to the aforementioned training process of the construction of the training set and the training of the initial image data segmentation model, and will not be repeated here.

步骤S1026:当所述分割标记结果确定准确,将当前过渡图像分割模型作为图像分割模型。Step S1026: When the segmentation mark result is determined to be accurate, the current transition image segmentation model is used as the image segmentation model.

It should be noted that steps S1021 to S1026 describe the model training process as a whole; the step numbers do not impose an absolute execution order. Each step can be adjusted according to the progress of model training. For example, in step S1023, if the segmentation labeling result is confirmed accurate and needs no correction, the current transition image segmentation model can be regarded as having reached the preset performance test standard and taken as the final image segmentation model, that is, step S1026 is executed right after step S1023. Several steps may also loop. For example, in steps S1023-S1025 it may happen that unlabeled organ image data are fed into the transition image segmentation model several times in a row and every resulting segmentation labeling result requires correction; in that case steps S1023-S1025 must be executed cyclically to complete, for each unlabeled organ image, the segmentation labeling, the correction of the labeling result, the update of the training set and the update of the transition image segmentation model. If model training progresses smoothly, some steps may even be skipped on the basis of the correction-and-confirmation judgment in step S1023: for example, if the transition image segmentation model obtained after step S1022 is judged in step S1023 to need no correction on the unlabeled organ image data, step S1026 can be executed directly without steps S1024 and S1025.

In addition, in a specific implementation, to improve the segmentation accuracy and precision of the image segmentation model, it can further be required that the segmentation labeling result is only confirmed accurate, and the image segmentation model correspondingly confirmed, when the segmentation labeling results of several consecutive unlabeled image data need no correction.
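The iterative procedure of steps S1021-S1026, including the stricter stopping rule of requiring several consecutive uncorrected results, can be sketched as the following loop; every function name here (train, predict, needs_correction, correct) is a hypothetical placeholder for the operations described above:

```python
def build_segmentation_model(train, predict, needs_correction, correct,
                             training_set, unlabeled, required_streak=3):
    """Human-in-the-loop training loop sketched from steps S1021-S1026.

    `train` fits a model on the training set, `predict` produces a
    segmentation label, `needs_correction` stands in for the manual
    check, and `correct` for the manual fix. Training stops once
    `required_streak` consecutive predictions need no correction.
    """
    model = train(training_set)                # S1021/S1022: transition model
    streak = 0
    for image in unlabeled:                    # S1023: label unlabeled data
        label = predict(model, image)
        if needs_correction(image, label):
            training_set.append((image, correct(image, label)))  # S1024
            model = train(training_set)        # S1025: retrain on updated set
            streak = 0
        else:
            streak += 1
            if streak >= required_streak:      # accuracy confirmed
                break
    return model                               # S1026: final model

# Toy usage with a model that is never corrected: training happens once
# and the loop stops after three consecutive accepted predictions.
model = build_segmentation_model(
    train=lambda ts: "trained-on-%d" % len(ts),
    predict=lambda m, img: 0,
    needs_correction=lambda img, lab: False,
    correct=lambda img, lab: lab,
    training_set=[("sample", 0)],
    unlabeled=["a", "b", "c", "d"],
)
print(model)   # → trained-on-1
```

The streak counter encodes the stricter confirmation rule: a single accepted prediction is not enough, but any required correction resets the count and triggers a retraining pass.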

步骤S103:将待分割的脏器图像数据输入到所述图像分割模型,以对所述待分割的脏器图像数据进行分割标记。Step S103: Input the organ image data to be segmented into the image segmentation model, so as to mark the organ image data to be segmented for segmentation.

In practical application, accurate segmentation results can be obtained quickly by inputting the organ image data to be segmented into the image segmentation model and letting the model predict automatically. Figures 6 and 7 show, respectively, the results of segmenting with this scheme the brain MRI image data in the MRBrainS18 dataset and in the IBSR (Internet Brain Segmentation Repository). Image denotes the organ image data to be segmented, Ground Truth denotes the result of manual segmentation labeling of that data, and Predict denotes the segmentation labeling result produced by the image segmentation model.

In summary, the deep learning-based image segmentation method, apparatus, terminal device and storage medium obtain a training set based on the segmentation labeling operation on sample image data within organ image data, the organ image data being obtained by scanning an organ; generate an initial image segmentation model and input the training set into it for model training to obtain an image segmentation model for image structure segmentation, where the initial image segmentation model is obtained by adding, at the central layer of the U-Net framework, a regular constraint branch parallel to the decoder branch of the U-Net framework, the regular constraint branch being used to regularize the encoder branch; and input organ image data to be segmented into the image segmentation model to perform segmentation labeling on it. By adding a regular constraint branch parallel to the decoder branch at the central layer of the U-Net framework, the regular constraint branch combines with the encoder of the U-Net framework into a variational autoencoder. In the overall model, the variational autoencoder and the U-Net framework share the encoder weights, and under the action of the variational autoencoder's loss function an additional constraint is imposed on the encoder weights in the U-Net network, so that the overall model can extract more robust features from the image data.

实施例二Embodiment 2

图8为本发明实施例二提供的一种基于深度学习的图像分割装置的结构示意图。参考图8,该基于深度学习的图像分割装置包括:训练集生成单元201、模型训练单元202和分割标记单元203。FIG. 8 is a schematic structural diagram of an apparatus for image segmentation based on deep learning according to Embodiment 2 of the present invention. Referring to FIG. 8 , the deep learning-based image segmentation apparatus includes: a training set generating unit 201 , a model training unit 202 and a segmentation marking unit 203 .

The training set generation unit 201 is configured to obtain a training set based on the segmentation labeling operation on sample image data in organ image data, the organ image data being obtained by scanning an organ. The model training unit 202 is configured to generate an initial image segmentation model and input the training set into the initial image segmentation model for model training to obtain an image segmentation model for image structure segmentation, where the initial image segmentation model is obtained by adding, at the central layer of the U-Net framework, a regular constraint branch parallel to the decoder branch of the U-Net framework, the regular constraint branch being used to regularize the encoder branch of the U-Net framework. The segmentation labeling unit 203 is configured to input organ image data to be segmented into the image segmentation model, so as to perform segmentation labeling on the organ image data to be segmented.

在上述实施例的基础上,所述模型训练单元202,包括:On the basis of the above embodiment, the model training unit 202 includes:

模型初始化模块,用于生成初始图像分割模型;The model initialization module is used to generate the initial image segmentation model;

初始训练模块,用于将所述训练集输入所述初始图像分割模型进行模型训练,得到过渡图像分割模型;an initial training module for inputting the training set into the initial image segmentation model for model training to obtain a transitional image segmentation model;

中间测试模块,用于将未分割标记的脏器图像数据输入所述过渡图像分割模型进行分割标记,对分割标记结果进行修正确定;an intermediate test module, used for inputting the unsegmented and marked organ image data into the transition image segmentation model for segmentation and marking, and correcting and determining the segmentation and marking results;

训练集更新模块,用于将修正后的分割标记结果添加更新到所述训练集;A training set update module, for adding and updating the revised segmentation mark result to the training set;

模型更新模块,用于根据添加更新后的训练集对所述过渡图像分割模型进行训练更新;A model updating module, used for training and updating the transition image segmentation model according to the added and updated training set;

模型确定模块,用于当所述分割标记结果确定准确,将当前过渡图像分割模型作为图像分割模型。The model determination module is configured to use the current transition image segmentation model as the image segmentation model when the segmentation marking result is determined to be accurate.

在上述实施例的基础上,所述正则约束分支的损失函数通过后验概率分布qφ(z|x)与似然函数pθ(z|x)之间的KL散度进行计算,其中x为输入图像,z为所述正则约束分支和所述编码器分支组成的变分自编码器中的隐变量,φ,θ分别对应编码器分支和解码器分支中网络待学习的参数。On the basis of the above embodiment, the loss function of the regular constraint branch is calculated by the KL divergence between the posterior probability distribution q φ (z|x) and the likelihood function p θ (z|x), where x is the input image, z is the latent variable in the variational autoencoder composed of the regular constraint branch and the encoder branch, φ and θ correspond to the parameters to be learned by the network in the encoder branch and the decoder branch, respectively.

On the basis of the above embodiment, the loss function L_VAE(θ, φ) is calculated by the following formula:

L_VAE(θ, φ) = -E_{z~qφ(z|x)}[log pθ(x|z)] + D_KL(qφ(z|x) || pθ(z))

where -E_{z~qφ(z|x)}[log pθ(x|z)] denotes the reconstruction mean squared error, and D_KL(qφ(z|x)||pθ(z)) denotes the reverse KL divergence between the prior distribution pθ(z) and the posterior probability distribution qφ(z|x).

在上述实施例的基础上,所述中心层与所述正则约束分支之间添加有高斯噪声。On the basis of the above embodiment, Gaussian noise is added between the central layer and the regular constraint branch.

在上述实施例的基础上,所述解码器的损失函数为二值交叉熵损失。Based on the above embodiment, the loss function of the decoder is a binary cross-entropy loss.

在上述实施例的基础上,所述脏器图像数据为脑部MRI图像数据;On the basis of the above embodiment, the organ image data is brain MRI image data;

对应的,所述训练集生成单元201,包括:Correspondingly, the training set generating unit 201 includes:

图像预处理模块,用于对脏器图像数据进行预处理,以获得灰度密度和/或灰度范围一致的脏器图像数据;The image preprocessing module is used to preprocess the organ image data to obtain organ image data with consistent grayscale density and/or grayscale range;

分割标记模块,用于选择多个预处理后的脏器图像数据作为样本图像数据接收分割标记操作得到训练集。The segmentation and marking module is used to select a plurality of preprocessed organ image data as sample image data and receive a segmentation and marking operation to obtain a training set.

在上述实施例的基础上,所述灰度密度通过偏差场校正进行预处理;所述灰度范围通过灰度标准化进行预处理。On the basis of the above embodiment, the grayscale density is preprocessed by bias field correction; the grayscale range is preprocessed by grayscale normalization.

在上述实施例的基础上,所述中心层包括多尺度结构。On the basis of the above embodiments, the central layer includes a multi-scale structure.

On the basis of the above embodiments, the central layer includes a Dropout operation.

在上述实施例的基础上,所述编码器的每个卷积层的神经元的数量相同。On the basis of the above embodiment, the number of neurons in each convolutional layer of the encoder is the same.

在上述实施例的基础上,所述编码器的下采样的次数为3次。On the basis of the above embodiment, the number of downsampling times of the encoder is 3 times.

本发明实施例提供的基于深度学习的图像分割装置包含在活体检测设备中,且可用于执行上述实施例一中提供的任一基于深度学习的图像分割方法,具备相应的功能和有益效果。The deep learning-based image segmentation apparatus provided in the embodiment of the present invention is included in a living body detection device, and can be used to execute any of the deep learning-based image segmentation methods provided in the above-mentioned first embodiment, and has corresponding functions and beneficial effects.

实施例三Embodiment 3

FIG. 9 is a schematic structural diagram of a terminal device provided by Embodiment 3 of the present invention; the terminal device is a specific hardware embodiment of the aforementioned deep learning-based image segmentation device. As shown in FIG. 9, the terminal device includes a processor 310, a memory 320, an input device 330, an output device 340 and a communication device 350. There may be one or more processors 310 in the terminal device; one processor 310 is taken as an example in FIG. 9. The processor 310, memory 320, input device 330, output device 340 and communication device 350 in the terminal device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 9.

As a computer-readable storage medium, the memory 320 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the deep learning-based image segmentation method in the embodiments of the present invention (for example, the training set generation unit 201, the model training unit 202 and the segmentation labeling unit 203 in the deep learning-based image segmentation apparatus). By running the software programs, instructions and modules stored in the memory 320, the processor 310 executes the various functional applications and data processing of the terminal device, that is, implements the above deep learning-based image segmentation method.

存储器320可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端设备的使用所创建的数据等。此外,存储器320可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器320可进一步包括相对于处理器310远程设置的存储器,这些远程存储器可以通过网络连接至终端设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 320 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Additionally, memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, the memory 320 may further include memory located remotely from the processor 310, and these remote memories may be connected to the terminal device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.

输入装置330可用于接收输入的数字或字符信息,以及产生与终端设备的用户设置以及功能控制有关的键信号输入。输出装置340可包括显示屏等显示设备。The input device 330 may be used to receive input numerical or character information, and generate key signal input related to user setting and function control of the terminal device. The output device 340 may include a display device such as a display screen.

上述终端设备包含基于深度学习的图像分割装置,可以用于执行任意基于深度学习的图像分割方法,具备相应的功能和有益效果。The above-mentioned terminal equipment includes an image segmentation device based on deep learning, which can be used to execute any image segmentation method based on deep learning, and has corresponding functions and beneficial effects.

实施例四Embodiment 4

本发明实施例还提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行本申请任意实施例中提供的基于深度学习的图像分割方法中的相关操作,且具备相应的功能和有益效果。Embodiments of the present invention further provide a storage medium containing computer-executable instructions, when executed by a computer processor, the computer-executable instructions are used to execute the image segmentation method based on deep learning provided in any embodiment of the present application. related operations, and have corresponding functions and beneficial effects.

本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。As will be appreciated by those skilled in the art, the embodiments of the present application may be provided as a method, a system, or a computer program product.

Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present application. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。存储器可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。存储器是计算机可读介质的示例。In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. Memory may include non-persistent memory in computer readable media, random access memory (RAM) and/or non-volatile memory in the form of, for example, read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。Computer-readable media includes both persistent and non-permanent, removable and non-removable media, and storage of information may be implemented by any method or technology. Information may be computer readable instructions, data structures, modules of programs, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, excludes transitory computer-readable media, such as modulated data signals and carrier waves.

还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方法、商品或者设备中还存在另外的相同要素。It should also be noted that the terms "comprising", "comprising" or any other variation thereof are intended to encompass a non-exclusive inclusion such that a process, method, article or device comprising a series of elements includes not only those elements, but also Other elements not expressly listed or inherent to such a process, method, article of manufacture or apparatus are also included. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article of manufacture or apparatus that includes the element.

注意,上述仅为本发明的较佳实施例及所运用技术原理。本领域技术人员会理解,本发明不限于这里所述的特定实施例,对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本发明的保护范围。因此,虽然通过以上实施例对本发明进行了较为详细的说明,但是本发明不仅仅限于以上实施例,在不脱离本发明构思的情况下,还可以包括更多其他等效实施例,而本发明的范围由所附的权利要求范围决定。Note that the above are only preferred embodiments of the present invention and applied technical principles. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and various obvious changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in detail through the above embodiments, the present invention is not limited to the above embodiments, and can also include more other equivalent embodiments without departing from the concept of the present invention. The scope is determined by the scope of the appended claims.

Claims (10)

1. A deep learning-based image segmentation method, comprising:
obtaining a training set based on a segmentation labeling operation on sample image data in organ image data, the organ image data being obtained by scanning an organ;
generating an initial image segmentation model, and inputting the training set into the initial image segmentation model for model training to obtain an image segmentation model for image structure segmentation, wherein the initial image segmentation model is obtained by adding, at the central layer of a U-Net framework, a regular constraint branch parallel to the decoder branch of the U-Net framework, the regular constraint branch being used to regularize the encoder branch of the U-Net framework; and
inputting organ image data to be segmented into the image segmentation model, so as to perform segmentation labeling on the organ image data to be segmented.

2. The method according to claim 1, wherein generating the initial image segmentation model and inputting the training set into the initial image segmentation model for model training to obtain the image segmentation model for image structure segmentation comprises:
generating the initial image segmentation model;
inputting the training set into the initial image segmentation model for model training to obtain a transition image segmentation model;
inputting unlabeled organ image data into the transition image segmentation model for segmentation labeling, and correcting and confirming the segmentation labeling result;
adding the corrected segmentation labeling result to the training set;
training and updating the transition image segmentation model according to the updated training set; and
when the segmentation labeling result is confirmed to be accurate, taking the current transition image segmentation model as the image segmentation model.

3. The method according to claim 1, wherein the loss function of the regular constraint branch is calculated through the KL divergence between the posterior probability distribution qφ(z|x) and the likelihood function pθ(z|x), where x is the input image, z is the latent variable in the variational autoencoder formed by the regular constraint branch and the encoder branch, z follows a standard normal distribution, and φ and θ correspond to the parameters to be learned by the network in the encoder branch and the decoder branch, respectively.

4. The method according to claim 3, wherein the loss function L_VAE(θ, φ) is calculated by the following formula:
L_VAE(θ, φ) = ‖x − x̂‖² + D_KL(q_φ(z|x) ‖ p_θ(z))

where ‖x − x̂‖² denotes the reconstruction mean squared error between the input image x and its reconstruction x̂, and D_KL(q_φ(z|x) ‖ p_θ(z)) denotes the reverse KL divergence between the prior distribution p_θ(z) and the posterior probability distribution q_φ(z|x).
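For the loss in claim 4, when the posterior q_φ(z|x) is a diagonal Gaussian N(μ, σ²) and the prior p_θ(z) is standard normal (claim 3 states that z follows a standard normal distribution), the KL term has a well-known closed form. The NumPy sketch below illustrates the sum of the reconstruction mean squared error and that closed-form KL term; the function and argument names (`vae_loss`, `mu`, `log_var`) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """L_VAE = reconstruction MSE + D_KL(q_phi(z|x) || N(0, I)).

    mu, log_var: mean and log-variance of the diagonal Gaussian
    posterior q_phi(z|x) produced by the encoder branch.
    """
    recon_mse = np.mean((x - x_recon) ** 2)
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I):
    #   D_KL = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon_mse + kl

# When the posterior already matches the prior (mu = 0, sigma = 1)
# and the reconstruction is perfect, both terms vanish.
x = np.array([0.2, 0.4, 0.6])
loss = vae_loss(x, x, np.zeros(2), np.zeros(2))
```

Either a mismatched reconstruction or a posterior that drifts away from the standard normal prior increases the loss, which is how the branch regularizes the encoder.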
5. The method according to claim 1, characterized in that the loss function of the decoder is a binary cross-entropy loss.

6. The method according to claim 1, characterized in that the organ image data is brain MRI image data; correspondingly, obtaining a training set based on a segmentation-labeling operation on sample image data in the organ image data comprises:

preprocessing the organ image data to obtain organ image data with a consistent gray-level density and/or gray-level range; and

selecting a plurality of pieces of preprocessed organ image data as sample image data to receive the segmentation-labeling operation, so as to obtain the training set.

7. The method according to claim 6, characterized in that the gray-level density is preprocessed by bias field correction, and the gray-level range is preprocessed by gray-level normalization.

8. An image segmentation device based on deep learning, characterized in that it comprises:

a training set generation unit, configured to obtain a training set based on a segmentation-labeling operation on sample image data in organ image data, the organ image data being obtained by scanning an organ;

a model training unit, configured to generate an initial image segmentation model and input the training set into the initial image segmentation model for model training to obtain an image segmentation model for image structure segmentation, the initial image segmentation model being obtained by adding, at the center layer of a U-Net framework, a regularization branch parallel to the decoder branch of the U-Net framework, the regularization branch being used to impose a regularization constraint on the encoder branch of the U-Net framework; and

a segmentation labeling unit, configured to input organ image data to be segmented into the image segmentation model, so as to perform segmentation labeling on the organ image data to be segmented.

9. A terminal device, characterized in that it comprises:

one or more processors; and

a memory for storing one or more programs;

when the one or more programs are executed by the one or more processors, the one or more processors implement the deep-learning-based image segmentation method according to any one of claims 1-7.

10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the deep-learning-based image segmentation method according to any one of claims 1-7 is implemented.
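Claim 2 describes an iterative train / pseudo-label / correct / retrain loop for growing the training set. The sketch below is a minimal, hedged outline of that loop; all callable names (`train`, `predict`, `expert_correct`, `is_accurate`) are hypothetical placeholders supplied by the caller, not names from the patent.

```python
def bootstrap_training(initial_model, train_set, unlabeled_pool,
                       train, predict, expert_correct, is_accurate,
                       max_rounds=10):
    """Iteratively grow the training set with corrected pseudo-labels,
    following the loop described in claim 2."""
    # Train on the labeled samples to obtain a transition model.
    model = train(initial_model, train_set)
    for _ in range(max_rounds):
        # Segmentation-label the unlabeled data with the transition model.
        pseudo = [(img, predict(model, img)) for img in unlabeled_pool]
        # Correct and confirm the segmentation-labeling results.
        corrected = [expert_correct(img, mask) for img, mask in pseudo]
        # Add the corrected results to the training set and retrain.
        train_set = train_set + corrected
        model = train(model, train_set)
        # Stop once the results are confirmed to be accurate.
        if is_accurate(model):
            break
    return model, train_set
```

In practice `expert_correct` would be a human-in-the-loop annotation step and `is_accurate` a validation check; the loop ends either when the results are judged accurate or after `max_rounds` iterations.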
CN202010544386.8A 2020-06-15 2020-06-15 Image segmentation methods, devices, equipment and storage media based on deep learning Active CN111815569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010544386.8A CN111815569B (en) 2020-06-15 2020-06-15 Image segmentation methods, devices, equipment and storage media based on deep learning


Publications (2)

Publication Number Publication Date
CN111815569A true CN111815569A (en) 2020-10-23
CN111815569B CN111815569B (en) 2024-03-29

Family

ID=72846111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010544386.8A Active CN111815569B (en) 2020-06-15 2020-06-15 Image segmentation methods, devices, equipment and storage media based on deep learning

Country Status (1)

Country Link
CN (1) CN111815569B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613517A (en) * 2020-12-17 2021-04-06 深圳大学 Endoscopic instrument segmentation method, endoscopic instrument segmentation apparatus, computer device, and storage medium
CN113192025A (en) * 2021-04-28 2021-07-30 珠海横乐医学科技有限公司 Multi-organ segmentation method and medium for radiation particle internal radiotherapy interventional operation robot
CN115439652A (en) * 2022-09-06 2022-12-06 北京航空航天大学 Lesion area segmentation method and device based on normal tissue image information comparison

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110170768A1 (en) * 2010-01-11 2011-07-14 Tandent Vision Science, Inc. Image segregation system with method for handling textures
CN103440513A (en) * 2013-09-17 2013-12-11 西安电子科技大学 Method for determining specific visual cognition state of brain based on sparse nonnegative tensor factorization (SNTF)
US20140270447A1 (en) * 2013-03-13 2014-09-18 Emory University Systems, methods and computer readable storage media storing instructions for automatically segmenting images of a region of interest
US20160140438A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Hyper-class Augmented and Regularized Deep Learning for Fine-grained Image Classification
US20160248440A1 (en) * 2015-02-11 2016-08-25 Daniel Greenfield System and method for compressing data using asymmetric numeral systems with probability distributions
CN107610121A (en) * 2017-09-28 2018-01-19 河北大学 A kind of initial pose establishing method of liver statistical shape model
CN109886971A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image partition method and system based on convolutional neural networks
CN109919954A (en) * 2019-03-08 2019-06-21 广州视源电子科技股份有限公司 target object identification method and device
CN109949309A (en) * 2019-03-18 2019-06-28 安徽紫薇帝星数字科技有限公司 A kind of CT image for liver dividing method based on deep learning
US20190244357A1 (en) * 2018-02-07 2019-08-08 International Business Machines Corporation System for Segmentation of Anatomical Structures in Cardiac CTA Using Fully Convolutional Neural Networks
CN110517235A (en) * 2019-08-19 2019-11-29 苏州大学 A method for automatic segmentation of choroid in OCT images based on GCS-Net
CN110674824A (en) * 2019-09-26 2020-01-10 五邑大学 Finger vein segmentation method and device based on R2U-Net and storage medium
CN111210404A (en) * 2019-12-24 2020-05-29 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Method and device for classifying lens segmentation difficulty


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENG, Junlong et al., "Brain MRI Image Segmentation Algorithm Based on Deep Learning", 轻工科技 [Light Industry Science and Technology], no. 08 *


Also Published As

Publication number Publication date
CN111815569B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN113516659B (en) An automatic segmentation method of medical images based on deep learning
CN110675406A (en) CT image kidney segmentation algorithm based on residual double-attention depth network
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN111815569B (en) Image segmentation methods, devices, equipment and storage media based on deep learning
Zhang et al. Automatic labeling of MR brain images by hierarchical learning of atlas forests
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method and related equipment based on ViT mechanism model
CN118657800B (en) Joint segmentation method of multiple lesions in retinal OCT images based on hybrid network
Arega et al. Using MRI-specific data augmentation to enhance the segmentation of right ventricle in multi-disease, multi-center and multi-view cardiac MRI
CN116091412A (en) Method for segmenting tumor from PET/CT image
CN116894969A (en) Methods and systems for classifying risk of malignancy in the kidney and training thereof
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
Nour et al. Skin lesion segmentation based on edge attention vnet with balanced focal tversky loss
CN114298180A (en) A multimodal brain imaging feature learning method
CN114693671A (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN117218135B (en) Method and related equipment for segmenting plateau pulmonary edema chest film focus based on transducer
CN117409019B (en) Multi-mode brain tumor image segmentation method and system based on ensemble learning
Basu Analyzing Alzheimer's disease progression from sequential magnetic resonance imaging scans using deep convolutional neural networks
Yuan et al. Automatic construction of filter tree by genetic programming for ultrasound guidance image segmentation
Wang et al. Optimization algorithm of CT image edge segmentation using improved convolution neural network
Francis et al. SABOS‐Net: Self‐supervised attention based network for automatic organ segmentation of head and neck CT images
Anu et al. MRI denoising with residual connections and two-way scaling using unsupervised swin Convolutional U-Net Transformer (USCUNT)
Jucevičius et al. Investigation of MRI prostate localization using different MRI modality scans
CN116777835A (en) Medical image generation and metabolism evaluation method, system, device and medium
CN116205844A (en) Full-automatic heart magnetic resonance imaging segmentation method based on expansion residual error network
CN119107325B (en) Three-dimensional medical image segmentation method based on prompt driving mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant