CN107403201A - Intelligent, automated delineation method for tumor radiotherapy target areas and organs at risk - Google Patents
Intelligent, automated delineation method for tumor radiotherapy target areas and organs at risk
- Publication number
- CN107403201A CN201710687331.0A CN201710687331A
- Authority
- CN
- China
- Prior art keywords
- tumor
- image
- target area
- organs
- risk
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Abstract
The present invention is an intelligent, automated delineation method for tumor radiotherapy target areas and organs at risk. The steps are: 1) preprocessing of the multi-modal tumor images, including reconstruction, denoising, enhancement, registration and fusion; 2) automatic extraction of tumor image features: one or more sets of tumor radiomics (texture feature) information are extracted automatically from the preprocessed multi-modal medical tumor image data such as CT, CBCT, MRI, PET and/or ultrasound; 3) intelligent, automated delineation of the tumor radiotherapy target area and organs at risk using deep learning, machine learning, artificial intelligence, region growing, graph theory (random walk), geometric level set and/or statistical methods. With the present invention, the tumor radiotherapy target area (GTV) and organs at risk (OAR) can be delineated accurately.
Description
Technical Field
The invention relates to the technical fields of medical imaging, medical image analysis and processing, deep learning, machine learning, big data analysis, artificial intelligence, tumor radiation physics, radiobiology, radiotherapy and biomedical engineering; in particular to a method for intelligent, automatic extraction of tumor medical image features and for intelligent, automatic classification, detection, identification and segmentation of tumor radiotherapy target areas and organs at risk; specifically, an intelligent, automated delineation method for tumor radiotherapy target areas and organs at risk is presented.
Background
Radiation therapy is currently one of the three main techniques for treating tumors. Precise radiotherapy for malignant tumors relies on Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) and Cone Beam Computed Tomography (CBCT), together with intelligent processing of the corresponding medical image information. High-precision delineation of the tumor radiotherapy target area, or gross tumor volume (GTV), is a prerequisite and key technology for successful implementation of precise radiotherapy. Current automatic target-delineation techniques based on tumor CT, MRI, PET and CBCT image information cannot meet clinical radiotherapy requirements. Clinically, the radiotherapy physician delineates the tumor GTV manually, which is inefficient, highly subjective and often inaccurate, and this affects the accuracy of the radiotherapy plan and the treatment outcome.
Therefore, there is a need to provide an intelligent and automatic delineation method for tumor radiotherapy target area and organs at risk to solve the above problems.
Disclosure of Invention
The invention aims to provide an intelligent, automatic delineation method for tumor radiotherapy target areas and organs at risk, so as to solve the problems that current clinical manual delineation of the target area is inefficient, highly subjective and imprecise, which ultimately affects the accuracy of the radiotherapy plan and the treatment outcome.
The invention realizes the purpose through the following technical scheme:
An intelligent and automatic delineation method for a tumor radiotherapy target area and organs at risk comprises the following steps:
1-1) tumor image preprocessing: preprocessing such as three-dimensional reconstruction, denoising, enhancement, registration and fusion of tumor medical images such as CT, CBCT, MRI and PET;
1-2) automatic extraction of tumor image features: automatically extracting one or more sets of tumor radiomics (texture feature) information from the preprocessed multi-modal CT, CBCT, MRI, PET and/or ultrasound tumor medical image data, including but not limited to: 1) first-order statistical texture features (variance, skewness, kurtosis); 2) texture features based on the neighborhood gray-level difference matrix (contrast, frequency, coarseness, complexity, texture strength); 3) texture features based on the gray-level run-length matrix (short-run emphasis, long-run emphasis, gray-level non-uniformity, run percentage, low gray-level run emphasis, high gray-level run emphasis, short-run low gray-level emphasis, short-run high gray-level emphasis, long-run low gray-level emphasis, long-run high gray-level emphasis); 4) texture features based on the gray-level co-occurrence matrix (energy/angular second moment, entropy, contrast, inverse difference moment, correlation, variance, sum mean, sum variance, difference variance, difference entropy, cluster shade, cluster prominence, maximum probability); 5) texture features based on the gray-level size-zone matrix; 6) image features based on an adaptive regression kernel; 7) multi-level, implicit tumor image features obtained by deep learning with a three-dimensional deep convolutional neural network;
1-3) intelligent, automatic delineation of the tumor radiotherapy target area (GTV) and organs at risk: deep learning, machine learning, artificial intelligence, region growing, graph theory (random walk), geometric level set and/or statistical methods are adopted to delineate the tumor radiotherapy target area (GTV) and the organs at risk intelligently and automatically.
Further, the image preprocessing in step 1-1) comprises the following steps:
2-1) image acquisition:
(1) PET/CT, PET/MRI, CT and other images used for tumor diagnosis, and the radiotherapy simulation-localization CT (SCT) used for treatment planning. The images are scanned, reconstructed in three dimensions and archived by the hospital's commercial imaging equipment and are obtained by exporting their DICOM files from the hospital's clinical PACS system; the DICOM image files also contain the parameter information of each scan;
(2) on-board cone beam CT (CBCT), MRI or ultrasound images used for radiotherapy guidance;
2-2) preprocessing: comprises the following steps:
(1) extracting the image data and related information such as resolution, slice thickness and coordinates;
(2) with the SCT as the reference image, registering and interpolating the images of the other modalities to the same spatial resolution as the SCT by coarse rigid registration combined with high-precision deformable (elastic) fine registration;
(3) removing the gantry from the CT images; denoising the PET images; image enhancement; normalizing each modality, i.e. normalizing each image to zero mean and unit variance;
(4) multi-modal image fusion.
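As an illustration of steps (1)-(4), the following is a minimal preprocessing sketch in Python using SimpleITK; the toolkit, the file paths, the mutual-information rigid registration and the parameter choices are illustrative assumptions, not the specific registration algorithm of the invention.

```python
# Illustrative sketch of step 2-2): register a PET volume to the planning SCT,
# resample it onto the SCT grid, and normalize each modality to zero mean, unit variance.
# SimpleITK is an assumed toolkit; paths and parameters are placeholders.
import SimpleITK as sitk

sct = sitk.ReadImage("sct_volume.nii.gz", sitk.sitkFloat32)   # reference image
pet = sitk.ReadImage("pet_volume.nii.gz", sitk.sitkFloat32)   # moving image

# Coarse rigid registration (mutual information), standing in for the "rigid coarse registration" step.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(sct, pet, sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
rigid = reg.Execute(sct, pet)

# A deformable (B-spline) fine registration could follow here, standing in for the
# "high-precision deformable elastic fine registration" described above.

# Resample PET onto the SCT grid so both share the same spatial resolution.
pet_on_sct = sitk.Resample(pet, sct, rigid, sitk.sitkLinear, 0.0, sitk.sitkFloat32)

# Denoise the PET image (Gaussian smoothing as a simple stand-in) and normalize
# each modality to zero mean and unit variance.
pet_on_sct = sitk.DiscreteGaussian(pet_on_sct, variance=1.0)

def zero_mean_unit_variance(img):
    arr = sitk.GetArrayFromImage(img)
    out = sitk.GetImageFromArray((arr - arr.mean()) / (arr.std() + 1e-8))
    out.CopyInformation(img)
    return out

sct_norm = zero_mean_unit_variance(sct)
pet_norm = zero_mean_unit_variance(pet_on_sct)
```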
Further, in step 1-2) the tumor image features are extracted with an adaptive regression kernel. Some normal-tissue pixel points and tumor pixel points have the same SUV value, but their adaptive regression kernels differ markedly, so the adaptive regression kernel effectively characterizes changes in image gray value and texture. The specific steps are as follows:
3-1) the adaptive regression kernel value is estimated with the local neighborhood covariance matrix of the image; the corresponding kernel function is defined as

K(x_l - x_i; C_l) = \frac{\sqrt{\det(C_l)}}{2\pi h^2 \mu^2} \exp\!\left( -\frac{(x_l - x_i)^T C_l\, (x_l - x_i)}{2 h^2 \mu^2} \right)    (1)

where x_i denotes the i-th pixel point expressed in 3D coordinate form; z(x_i) denotes the gray value of pixel point x_i; x_l is a sampling point near x_i; z(x_l) denotes the gray value of the sampling point x_l; C_l is the local neighborhood covariance matrix of the image; \mu = 1, and h is an adjustable smoothing parameter;
3-2) the local edge structure of the image is related to the gradient covariance matrix of the image gray values, so the local neighborhood covariance matrix C_l can be estimated from the gradient information of the image gray values and expressed as

C_l \approx J_l^T J_l    (2)

where J_l is the gray-level gradient matrix of the local window around x_l;
3-3) the medical image is three-dimensional volume data, so the adaptive regression kernel must be computed in the three-dimensional image space; its gradient matrix is

J_l = \begin{bmatrix} \vdots & \vdots & \vdots \\ z_x(x_j) & z_y(x_j) & z_z(x_j) \\ \vdots & \vdots & \vdots \end{bmatrix}, \quad x_j \in w_l    (3)

where z_x(x_j), z_y(x_j) and z_z(x_j) are the derivative values of the image gray-value function z in the three orthogonal directions at pixel point x_j, x_j is a pixel point within the region of interest (ROI) of the tumor image, and the local window w_l has size n \times n \times n, where n is a positive odd number;
3-4) the covariance estimate C_l \approx J_l^T J_l is in general not full rank and is unstable; the gradient matrix is therefore eigen-decomposed with a regularization method, giving

C_l = \gamma \sum_{i=1}^{3} \rho_i\, v_i v_i^T    (4)

\rho_i = \frac{\lambda_i + \lambda'}{\sqrt{\lambda_j \lambda_k} + \lambda'} \;\; (j, k \neq i), \qquad \gamma = \left( \frac{\lambda_1 \lambda_2 \lambda_3 + \lambda'}{M} \right)^{1/2}    (5)

where \rho_i and \gamma are, respectively, the elongation and scale parameters, \lambda' is a regularization parameter that suppresses noise and keeps the denominator of \rho_i and the value of \gamma from being zero, and M is the number of samples in the local window. A fixed small positive value of \lambda' is used in the experiments of this embodiment. The eigenvalues \lambda_i and eigenvectors v_i are obtained from the eigenvalue decomposition of J_l^T J_l:

J_l^T J_l = \sum_{i=1}^{3} \lambda_i v_i v_i^T    (6)
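A minimal NumPy sketch of equations (1)-(6) as reconstructed above: for one voxel it builds the local gradient matrix J_l, regularizes its eigen-decomposition into a covariance matrix C_l, and evaluates the kernel value for a neighboring sample. The window size, the values of h and \lambda', and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def adaptive_regression_kernel(volume, xi, xl, n=3, h=1.0, lam=1e-2):
    """Steering-kernel value K(xl - xi; C_l) for 3D volume data (sketch of eqs. (1)-(6)).
    volume: 3D ndarray of gray values; xi, xl: integer voxel coordinates (z, y, x)."""
    r = n // 2
    zs, ys, xs = [slice(c - r, c + r + 1) for c in xl]
    patch = volume[zs, ys, xs].astype(np.float64)

    # Gradient matrix J_l: one row (z_x, z_y, z_z) per voxel of the local window w_l (eq. 3).
    gz, gy, gx = np.gradient(patch)
    J = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)   # shape (n**3, 3)

    # Eigen-decomposition of J^T J (eq. 6) with regularization (eqs. 4-5).
    JTJ = J.T @ J
    lam_vals, V = np.linalg.eigh(JTJ)                 # eigenvalues, ascending
    lam_vals = np.maximum(lam_vals, 0.0)
    M = J.shape[0]
    rho = np.empty(3)
    for i in range(3):
        others = np.delete(lam_vals, i)
        rho[i] = (lam_vals[i] + lam) / (np.sqrt(others.prod()) + lam)
    gamma = np.sqrt((lam_vals.prod() + lam) / M)
    C = gamma * sum(rho[i] * np.outer(V[:, i], V[:, i]) for i in range(3))   # eq. 4

    # Kernel value (eq. 1), with mu = 1.
    d = np.asarray(xl, dtype=np.float64) - np.asarray(xi, dtype=np.float64)
    return (np.sqrt(max(np.linalg.det(C), 1e-12)) / (2 * np.pi * h**2)
            * np.exp(-(d @ C @ d) / (2 * h**2)))
```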
further, step 1-3) is a random walk tumor delineation method integrating adaptive regression kernels, and the method specifically comprises the following steps:
4-1) each pixel of the image is defined as a vertex of a graph, and the similarity of spatially adjacent pixels is defined as the weight of the edge connecting the corresponding pixels (vertices), thereby constructing an undirected weighted graph G = (V, E), where V is the set of vertices of the graph, E is the set of edges, e_{ij} \in E is the edge connecting adjacent vertices v_i and v_j, and w_{ij} is its weight. The weight represents the probability that a random walker walks along that edge; the weight of the edge between two non-adjacent vertices of the weighted graph is 0, i.e. the random walker does not pass along that edge;
4-2) the edge weight integrating the adaptive regression kernel is computed with the following formula:

w_{ij} = \exp\!\left( -\beta \left[ (S_i - S_j)^2 + \| K_i - K_j \|^2 \right] \right)    (7)

where S_i is the SUV (gray) value of pixel point v_i, K_i is the column-stacked vector of its adaptive regression kernel, and \beta is a free parameter;
4-3) the probability that a random walker starting from any vertex v_i first reaches a labeled vertex of a given class is the same as the solution of the corresponding "Dirichlet minimization problem"; the image segmentation can therefore be obtained by solving for the optimal solution of the Dirichlet objective function of the random walk:

D[x] = \frac{1}{2} x^T L x    (8)

where x denotes the vector of probabilities of reaching the labeled seed points of a given object class from each non-seed pixel point. The Laplacian matrix L is defined as

L_{ij} = \begin{cases} d_i, & i = j \\ -w_{ij}, & v_i \text{ and } v_j \text{ adjacent} \\ 0, & \text{otherwise} \end{cases}    (9)

where w_{ij} is the weight of the edge associated with vertex v_i, and d_i = \sum_j w_{ij} is the degree of vertex v_i, i.e. the sum of the weights of all edges connected to that vertex;
4-4) the vertices of the graph are divided into two sets: the set V_M of labeled pixel points of the object classes and the set V_U of unlabeled pixel points, with V_M \cup V_U = V and V_M \cap V_U = \emptyset. Ordering the vertices according to the set they belong to, the Laplacian matrix can be decomposed into the block form

L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix}    (10)

and the Dirichlet problem can be decomposed into

D[x_U] = \frac{1}{2} \left( x_M^T L_M x_M + 2\, x_U^T B^T x_M + x_U^T L_U x_U \right)    (11)

where x_M and x_U denote, respectively, the probability vectors of the random walker first reaching a labeled pixel point of a given object class from the labeled and from the unlabeled pixel points;
To find the optimal solution of the minimization problem (11), D[x_U] is differentiated with respect to x_U and set equal to zero, which yields the following algebraic matrix equation to be solved:

L_U x_U = -B^T x_M    (12)
4-5) the tumor and the organs at risk are delineated according to the probability values obtained in 4-4). For each pixel point, the class corresponding to the maximum probability in its object-class probability vector is selected and the pixel point is labeled with that class. If the probability threshold for the tumor region is chosen as 0.5, pixel points whose probability value is greater than 0.5 are marked as tumor-region points, otherwise they are marked as normal-tissue points.
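A compact sketch of steps 4-1) to 4-5) under the reconstruction above: given precomputed edge weights, it assembles the graph Laplacian, solves the linear system (12) for the unlabeled vertices, and thresholds the resulting tumor probabilities at 0.5. NumPy/SciPy are assumed tools, and the data layout is an illustrative choice.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def random_walk_segment(weights, seeds_tumor, seeds_bg, n_vertices, threshold=0.5):
    """weights: dict {(i, j): w_ij} for adjacent vertex pairs of the undirected graph.
    seeds_tumor / seeds_bg: index arrays of labeled tumor / background seed vertices.
    Returns a boolean array: True where the vertex is assigned to the tumor region."""
    rows, cols, vals = [], [], []
    for (i, j), w in weights.items():
        rows += [i, j]; cols += [j, i]; vals += [w, w]
    W = sp.csr_matrix((vals, (rows, cols)), shape=(n_vertices, n_vertices))
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    L = (D - W).tocsr()                                  # Laplacian, eq. (9)

    marked = np.concatenate([seeds_tumor, seeds_bg])
    unmarked = np.setdiff1d(np.arange(n_vertices), marked)

    # Block decomposition (10) and linear system (12): L_U x_U = -B^T x_M,
    # with x_M = 1 on tumor seeds and 0 on background seeds.
    L_U = L[np.ix_(unmarked, unmarked)]
    B_T = L[np.ix_(unmarked, marked)]
    x_M = np.zeros(len(marked))
    x_M[:len(seeds_tumor)] = 1.0
    x_U = spla.spsolve(L_U.tocsc(), -B_T @ x_M)

    prob = np.zeros(n_vertices)
    prob[marked] = x_M
    prob[unmarked] = x_U
    return prob > threshold                              # step 4-5): threshold at 0.5
```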
Further, 1) in step 1-2) the tumor image features are extracted automatically with a deep neural network; 2) in step 1-3) a three-dimensional (3D) symmetric deep convolutional neural network (AgideepIRT) is adopted to delineate the tumor target area and organs at risk. The method specifically comprises the following steps:
5-1) deep learning of the tumor radiotherapy target area and organ-at-risk delineation network model: a 3D symmetric deep convolutional neural network, AgideepIRT, integrating multi-modal images and multi-scale image feature information is adopted, so that deep-learning feature extraction and the classification, identification and detection of the tumor radiotherapy target area or organs at risk are integrated in the same AgideepIRT network model; the AgideepIRT network model is trained in a supervised manner with the tumor radiotherapy target area and organ-at-risk information delineated by clinicians.
5-2) delineation of target area and organs at risk in tumor radiotherapy: based on the trained and learned AgideepIRT network model, classification, identification, detection and delineation of a target area and organs at risk of tumor radiotherapy are carried out. The method comprises the following specific steps:
1) establishing the classification and recognition network models: because different tissues and organs or tumor target areas have different image characteristics, this embodiment uses tumor CT, PET and MRI images simultaneously as the input of the AgideepIRT network, trains and learns the multi-level, high-level image features of the tumor radiotherapy target area and the organs at risk with multi-level transfer learning and supervised learning, and establishes detection, classification and recognition models for the tumor radiotherapy target area and for each organ at risk;
2) automatic delineation of the tumor radiotherapy target area and organs at risk from coarse to fine: first, the classification and recognition models of normal organs and tissues determine whether an organ contains a lesion; then, the images of the lesion regions are input into the tumor target-area detection, classification and recognition network AgideepIRT; finally, the AgideepIRT detection, classification and recognition results for the tumor target area are post-processed further in combination with the prior knowledge of clinical tumor radiotherapy experts, and the tumor radiotherapy target area is finally delineated with high precision.
Further, AgideepIRT is a symmetric network built from 3D convolutional residual learning modules Res(Ch, l, k); after the processing of each stage, the output result has the same size as the single-modality input image, achieving dense prediction. End-to-end 3D deeply supervised learning is performed with the DICE similarity coefficient as the optimization objective function for AgideepIRT training.
Furthermore, through skip connections AgideepIRT fuses features of the same resolution with the feature information of the corresponding layer and combines features of different layers, so that coarse-to-fine multi-scale information is effectively fused and the final delineation result is refined.
Further, the deep neural network learning method in step 5-1) comprises multi-level transfer learning and supervised learning.
Further, the multi-level transfer learning is divided into 4 levels according to the principle of maximum feature similarity:
9-1) transfer from the natural-image recognition set ImageNet to the PET, CT and MRI image sequence sets for tumor radiotherapy target area and organ-at-risk recognition. In this embodiment, the natural image set ImageNet is used to pre-train the AgideepIRT network feature parameters, and the pre-training result is transferred as the initial values for the fine-tuning learning of the AgideepIRT network feature parameters;
9-2) transfer from one anatomical region to another: deep neural network training and learning starts from the head-and-neck region; because the PET, CT and MRI images of tumors in different anatomical regions have a certain similarity, the network parameters learned by training on the head-and-neck region can be transferred to the brain and to the thoracic-abdominal region, and then from the thoracic-abdominal region to the pelvic region;
9-3) transfer from one tumor/organ to other tumors or organs within the same anatomical region: the examples of this patent transfer from nasopharyngeal carcinoma to other head-and-neck tumors, from lung cancer to thoracic-abdominal tumors, from prostate cancer to pelvic tumors, and from brain glioma to other brain tumors;
9-4) transfer from normal organs to tumors within the same anatomical region.
Furthermore, supervised learning fine-tunes the network using the information of the specific tumors and organs at risk manually delineated by clinicians. Through supervised learning and the back-propagation algorithm, all hidden-layer nodes of the AgideepIRT network are adjusted from the output layer downward; supervised learning of the AgideepIRT network fine-tunes the network feature parameters, extracts the multi-level, high-level discriminative features of the specific tumor target area and organs at risk, and improves the accuracy of tumor and organ-at-risk classification, detection and identification; training on a large amount of data yields a stable AgideepIRT 3D delineation model for organs at risk and tumor target areas.
The invention can delineate the tumor radiotherapy target area and organs at risk intelligently, automatically and with high precision.
Drawings
FIG. 1 is a schematic representation of the steps of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the image preprocessing step in embodiment 1 of the present invention;
FIG. 3 is a schematic texture diagram of embodiment 2 of the present invention;
FIG. 4 is a schematic tumor delineation of embodiment 2 of the present invention;
FIG. 5 is one of the schematic diagrams of embodiment 3 of the present invention;
FIG. 6 is a second schematic diagram of embodiment 3 of the present invention;
FIG. 7 is a third schematic diagram of embodiment 3 of the present invention;
FIG. 8 is a fourth schematic diagram of embodiment 3 of the present invention;
FIG. 9 is a fifth schematic diagram of embodiment 3 of the present invention;
FIG. 10 is a sixth schematic diagram of embodiment 3 of the present invention;
FIG. 11 is a seventh schematic diagram of embodiment 3 of the present invention;
FIG. 12 is an eighth schematic diagram of embodiment 3 of the present invention.
Detailed Description
Example 1:
the embodiment shows an image preprocessing method, which includes image acquisition and preprocessing of an acquired image:
Image acquisition involves two kinds of images: (a) PET/CT, PET/MRI, CT and other images used for tumor diagnosis, and the radiotherapy simulation-localization CT (SCT) used for treatment planning, imaged by the hospital's commercial imaging equipment and obtained from the DICOM files exported from the hospital's clinical PACS system, where the DICOM files also provide the scan parameter information; (b) on-board cone beam CT (CBCT), MRI or ultrasound images used for radiotherapy guidance.
The image information of each modality is preprocessed, and with the SCT as the reference image the images of the other modalities are registered and interpolated to the same spatial resolution as the SCT. In particular, tumor PET/CT/MRI imaging systems have different imaging principles and characteristics and provide complementary functional and anatomical information about the tumor and organs at risk. Tumor PET has strong contrast with normal tissue, high sensitivity and good biological specificity, but low spatial resolution, a large partial volume effect and strong noise; MRI has high spatial resolution and strong soft-tissue contrast, but artifacts caused by gaps between tissues and organs are difficult to avoid and correct; CT also has high spatial resolution but low soft-tissue contrast, so tumor and normal tissue are difficult to distinguish; ultrasound images allow continuous, dynamic observation of the movement and function of internal organs, but are noisy and prone to artifacts. To overcome the shortcomings of tumor target delineation based on a single CT, MRI or PET image, high-precision delineation of the tumor radiotherapy target area and organs at risk should combine all clinically available image information, such as diagnostic PET/CT, PET/MRI and CT, the radiotherapy simulation-localization CT, and on-line CBCT and MRI.
To further illustrate the image preprocessing, the process of acquiring PET and radiotherapy simulation positioning SCT images from a DICOM file and preprocessing the images is illustrated, as shown in fig. 2. The method comprises the following steps:
(1) extracting the image data and related information such as resolution, slice thickness and coordinates;
(2) with the SCT as the reference image, registering and interpolating the images of the other modalities to the same spatial resolution as the SCT by coarse rigid registration combined with high-precision deformable (elastic) fine registration;
(3) removing the gantry from the CT images; denoising the PET images; image enhancement; normalizing each modality, i.e. normalizing each image to zero mean and unit variance.
Example 2:
this embodiment shows a random walk tumor delineation method integrating adaptive regression kernels:
the traditional random walk method only considers the gray information of an image, utilizes an edge weight value function of an undirected graph to represent the similarity between adjacent pixel points in space, and does not consider the neighborhood information of the pixels; the PET image of the tumor has obvious anisotropic property, which is mainly reflected in uneven distribution of SUV values in a tumor region; in the PET image, the SUV (Standard UptakeValue) value is influenced by the tissue metabolism intensity, and the more active the metabolism, the higher the corresponding SUV value; the metabolism of normal brain tissues around the head and neck tumor is also active, the SUV value distribution is close to the tumor region, and the side weight function is close to 1, so that the random walk algorithm cannot effectively distinguish the tumor regions. Therefore, target delineation using only PET SUV values does not yield satisfactory results; aiming at the defects of the traditional random walk method, the tumor PET adaptive regression kernel is integrated into the random walk algorithm, the edge weight value construction method of the corresponding undirected graph is improved, and the capacity of distinguishing the tumor from the normal high-metabolic tissue is improved.
The method specifically comprises the following steps:
1) extracting tumor image features by using an adaptive regression kernel:
The adaptive regression kernel considers both the gray-level information and the structural information of the image, and estimates the kernel value with the local neighborhood covariance matrix of the image; the corresponding kernel function is defined as

K(x_l - x_i; C_l) = \frac{\sqrt{\det(C_l)}}{2\pi h^2 \mu^2} \exp\!\left( -\frac{(x_l - x_i)^T C_l\, (x_l - x_i)}{2 h^2 \mu^2} \right)    (1)

where x_i denotes the i-th pixel point expressed in 3D coordinate form; z(x_i) denotes the gray value of pixel point x_i; x_l is a sampling point near x_i; z(x_l) denotes the gray value of the sampling point x_l; C_l is the local neighborhood covariance matrix of the image; \mu = 1, and h is an adjustable smoothing parameter;
Because the local edge structure of the image is related to the gradient covariance matrix of the image gray values, the local neighborhood covariance matrix C_l can be estimated from the gradient information of the image gray values and expressed as

C_l \approx J_l^T J_l    (2)

where J_l is the gray-level gradient matrix of the local window around x_l;
The medical image is three-dimensional volume data, so the adaptive regression kernel must be computed in the three-dimensional image space; its gradient matrix is

J_l = \begin{bmatrix} \vdots & \vdots & \vdots \\ z_x(x_j) & z_y(x_j) & z_z(x_j) \\ \vdots & \vdots & \vdots \end{bmatrix}, \quad x_j \in w_l    (3)

where z_x(x_j), z_y(x_j) and z_z(x_j) are the derivative values of the image gray-value function z in the three orthogonal directions at pixel point x_j, x_j is a pixel point within the region of interest (ROI) of the tumor image, and the local window w_l has size n \times n \times n, where n is a positive odd number;
The covariance estimate of equation (2) is in general not full rank and is unstable. To better estimate the covariance matrix C_l, the eigenvalues of the gradient matrix are decomposed with a regularization method:

C_l = \gamma \sum_{i=1}^{3} \rho_i\, v_i v_i^T    (4)

\rho_i = \frac{\lambda_i + \lambda'}{\sqrt{\lambda_j \lambda_k} + \lambda'} \;\; (j, k \neq i), \qquad \gamma = \left( \frac{\lambda_1 \lambda_2 \lambda_3 + \lambda'}{M} \right)^{1/2}    (5)

where \rho_i and \gamma are, respectively, the elongation and scale parameters, \lambda' is a regularization parameter that suppresses noise and keeps the denominator of \rho_i and the value of \gamma from being zero, and M is the number of samples in the local window. A fixed small positive value of \lambda' is used in the experiments. The eigenvalues \lambda_i and eigenvectors v_i are obtained from the eigenvalue decomposition of J_l^T J_l:

J_l^T J_l = \sum_{i=1}^{3} \lambda_i v_i v_i^T    (6)
The adaptive regression kernels of normal tissue and of the tumor region differ markedly, as shown in fig. 3, where pixel point a lies inside the tumor and pixel point b inside normal brain tissue; a and b have the same SUV value, but the adaptive regression kernels of the two points differ markedly. The adaptive regression kernel thus effectively characterizes changes in image gray value and texture, which benefits high-precision delineation of the tumor radiotherapy target area;
2) a random walk tumor delineation method integrating adaptive regression kernels is carried out, and the method comprises the following specific steps:
(1) each pixel of the image is defined as a vertex of a graph, and the similarity of spatially adjacent pixels is defined as the weight of the edge connecting the corresponding pixels (vertices), thereby constructing an undirected weighted graph G = (V, E), where V is the set of vertices of the graph, E is the set of edges, e_{ij} \in E is the edge connecting adjacent vertices v_i and v_j, and w_{ij} is its weight. The weight represents the probability that a random walker walks along that edge; the weight of the edge between two non-adjacent vertices of the weighted graph is 0, i.e. the random walker does not pass along that edge;
(2) the edge weight integrating the adaptive regression kernel is then computed:

w_{ij} = \exp\!\left( -\beta \left[ (S_i - S_j)^2 + \| K_i - K_j \|^2 \right] \right)    (7)

where S_i denotes the SUV value of pixel point v_i in the PET image, K_i denotes the column-stacked vector of the k^3-dimensional adaptive regression kernel matrix corresponding to pixel point v_i, the local window w_l has size k \times k \times k with k a positive odd number, and \beta is a free parameter. Equation (7) fuses the PET SUV values with the adaptive kernel: when the SUV values of adjacent normal tissue and tumor regions are close, (S_i - S_j)^2 is approximately zero, but the adaptive kernel textures differ widely and \| K_i - K_j \|^2 is greater than zero; the weight w_{ij} between the pixel points of the two tissues is therefore reduced, which helps distinguish the tumor from normal tissue;
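A small NumPy sketch of the edge weight in equation (7) as reconstructed above, combining the SUV difference with the difference of the column-stacked adaptive regression kernels of two adjacent voxels; the value of β and the example kernel vectors are illustrative assumptions.

```python
import numpy as np

def edge_weight(suv_i, suv_j, kernel_i, kernel_j, beta=1.0):
    """Edge weight w_ij of eq. (7): fuses PET SUV similarity with the similarity of the
    column-stacked adaptive regression kernel vectors K_i, K_j (length k**3 each)."""
    suv_term = (suv_i - suv_j) ** 2
    kernel_term = np.sum((np.asarray(kernel_i) - np.asarray(kernel_j)) ** 2)
    return np.exp(-beta * (suv_term + kernel_term))

# Even when two adjacent voxels have nearly equal SUV (suv_term ~ 0), a large kernel_term
# lowers w_ij, which helps the random walker separate the tumor from metabolically
# active normal tissue.
w = edge_weight(4.2, 4.2, np.ones(27), np.zeros(27), beta=0.5)
```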
(3) the probability that a random walker starting from any vertex v_i first reaches a labeled vertex of a given class is the same as the solution of the corresponding "Dirichlet minimization problem"; the image segmentation can therefore be obtained by solving for the optimal solution of the Dirichlet objective function of the random walk:

D[x] = \frac{1}{2} x^T L x    (8)

where x denotes the vector of probabilities of reaching the labeled seed points of a given object class from each non-seed pixel point; the Laplacian matrix L is defined as

L_{ij} = \begin{cases} d_i, & i = j \\ -w_{ij}, & v_i \text{ and } v_j \text{ adjacent} \\ 0, & \text{otherwise} \end{cases}    (9)

where w_{ij} is the weight of the edge associated with vertex v_i, and d_i = \sum_j w_{ij} is the degree of vertex v_i, i.e. the sum of the weights of all edges connected to that vertex;
(4) to solve the Dirichlet problem of equation (8), the vertices of the graph are divided into two sets: the set V_M of labeled pixel points of the object classes and the set V_U of unlabeled pixel points, with V_M \cup V_U = V and V_M \cap V_U = \emptyset. Ordering the vertices according to the set they belong to, the Laplacian matrix can be decomposed into the block form

L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix}    (10)

so that the Dirichlet problem of equation (8) can be decomposed into

D[x_U] = \frac{1}{2} \left( x_M^T L_M x_M + 2\, x_U^T B^T x_M + x_U^T L_U x_U \right)    (11)

where x_M and x_U denote, respectively, the probability vectors of the random walker first reaching a labeled pixel point of the object class from the labeled and from the unlabeled pixel points;
To find the optimal solution of the minimization problem (11), D[x_U] is differentiated with respect to x_U and set equal to zero, giving the following equation:

L_U x_U = -B^T x_M    (12)
For each pixel point, the class corresponding to the maximum probability in its probability vector is selected and the pixel point is labeled with that class. If the probability threshold for the tumor region is chosen as 0.5, pixel points whose probability value is greater than 0.5 are marked as tumor region, otherwise as normal tissue region, and the tumor delineation is complete. Fig. 4 shows the tumor delineation obtained with the random walk method integrating the adaptive regression kernel, comparing the result of the adaptive regression kernel random walk method with the manual delineation obtained by the physician's visual assessment.
Example 3:
This embodiment shows a method for delineating organs at risk and tumors based on deep neural network learning. In clinical radiotherapy planning for malignant tumors, the malignant tumor must be delineated precisely to achieve highly conformal irradiation of the tumor, and the normal tissues and organs adjacent to the tumor must be delineated precisely so that their irradiation is avoided to the greatest possible extent. Using a deep learning method, deep, high-level, implicit features specific to the tumor radiotherapy target area and the organs at risk are learned automatically from the tumor PET/CT/MRI images. Based on these features, the organs at risk and the malignant tumor target area are delineated with high precision, improving the accuracy of organ-at-risk and tumor delineation. This example combines all clinically available tumor CT, PET and MRI images to detect, classify, identify and delineate the tumor radiotherapy target area and organs at risk with a deep neural network learning method.
The method specifically comprises the following steps:
Step 1) a 3D symmetric deep convolutional neural network, AgideepIRT, integrating multi-modal and multi-scale feature information is adopted; deep-learning feature extraction and the classification, identification and detection of the tumor radiotherapy target area or organs at risk are integrated in the same deep convolutional neural network model AgideepIRT, as shown in FIG. 5. The AgideepIRT model is trained in a supervised manner using tumor radiotherapy target areas and organs at risk delineated by clinicians;
(1) AgideepIRT is based on the 3D convolutional residual learning module Res(Ch, l, k): the module input passes through l convolution layers that extract features and apply non-linear processing, and learning is then performed by constructing a residual function. The module Res(Ch, l, k) is shown schematically in FIG. 6, where the parameter k determines the size of the 3D convolution kernel as k*k*k, the parameter l determines the number of convolution operations applied in the module, and the parameter Ch denotes the number of feature channels extracted with the convolution kernels;
where: (a) the convolution symbol denotes the convolution operation, which can learn features from the data automatically; the convolution kernels in module Res are typically 3 x 3 x 3, and a convolutional layer typically employs multiple convolution kernels to extract multiple feature maps;
Let the convolution kernel k_{ij}^l connect the i-th feature map of layer l-1 with the j-th feature map of layer l; the convolutional layer uses this kernel to detect local features at different positions of the input:

x_j^l = f\!\left( \sum_{i=1}^{D^{l-1}} x_i^{l-1} * k_{ij}^l + b_j^l \right)    (13)

where * denotes the convolution operation, b_j^l is a bias value and f(·) is a non-linear activation function. That is, the j-th feature map x_j^l of convolutional layer l is computed from the feature maps x_i^{l-1} of the adjacent lower layer l-1, where D^{l-1} denotes the number of feature maps of layer l-1;
(b) the summation symbol denotes the residual: the features extracted by the l convolution layers are added to the module input;
(c) the activation symbol denotes a non-linear activation function, computed with the PReLU method according to equation (14), where a is a very small constant, e.g. a = 0.01:

f(x) = \begin{cases} x, & x > 0 \\ a\,x, & x \le 0 \end{cases}    (14)
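A minimal PyTorch sketch of a Res(Ch, l, k) block as described in (1)-(c): l 3D convolutions with PReLU activations (equation (14)) whose output is added to the block input as the residual. PyTorch is an assumed framework, and the layer arrangement is an illustrative reading of FIG. 6, not the exact module of the invention.

```python
import torch
import torch.nn as nn

class Res3D(nn.Module):
    """Sketch of the 3D convolutional residual learning module Res(Ch, l, k)."""
    def __init__(self, ch, l=2, k=3):
        super().__init__()
        layers = []
        for _ in range(l):
            layers += [nn.Conv3d(ch, ch, kernel_size=k, padding=k // 2),
                       nn.PReLU(num_parameters=ch, init=0.01)]   # PReLU, eq. (14)
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: features extracted by the l conv layers are added to the input.
        return x + self.body(x)

# Example: a Res(Ch=16, l=2, k=3) block applied to a 3D volume that has already been
# projected to 16 feature channels.
block = Res3D(ch=16)
y = block(torch.randn(1, 16, 32, 32, 32))
```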
(2) the size of the image data input to AgideepIRT is set according to the specific delineation and recognition object; in this scheme an integer power of 2 is adopted, i.e. a size of 2^n * 2^n * 2^n. In practical application the parameter n can be adjusted, or the image can be interpolated to 2^n * 2^n * 2^n;
(3) the input of the AgideepIRT network can be a single-modality image or a multi-modality image; if data of M modalities are used as input, the input is an M-channel volume of size 2^n * 2^n * 2^n, where M is a natural number greater than or equal to 1;
(4) the image data input to AgideepIRT must first be preprocessed: with the SCT as the reference image, the images of the other modalities are registered and interpolated to the same spatial resolution as the SCT by coarse rigid registration combined with high-precision deformable (elastic) fine registration; the PET and ultrasound images are denoised; each modality is normalized, i.e. each image is normalized to zero mean and unit variance; the gantry is removed from the CT images; etc., see FIG. 2 and its description;
(5) FIG. 5 is a symmetrical U-shaped network model, which is composed of a left path and a right path;
(6) the left network of FIG. 5 is the compression path, which applies (n-3) 3D convolutional residual learning modules and downsampling operations at the different resolutions; downsampling convolutions compress the representation size while the features are extracted;
(7) the right network of FIG. 5 is the expansion path, which further extracts features and expands the low-resolution feature maps with upsampling convolution operations; it applies (n-4) 3D convolutional residual learning modules and upsampling operations at the resolutions corresponding to the left path, and the number of convolution layers in each 3D convolutional residual learning module is the same as in its counterpart module on the left;
(8) the downsampling operation has an output data size of half the input data size and reduced resolution; it is implemented by a convolution with stride 2 and a 2 x 2 x 2 convolution kernel, as shown in FIG. 7; downsampling reduces the representation size of the input information and enlarges the receptive field of the subsequent network layers;
(9) the upsampling operation has an output data size of 2 times the input data size and increased resolution; it is implemented by a deconvolution (transposed convolution) with stride 2 and a 2 x 2 x 2 convolution kernel, as shown in FIG. 8; upsampling expands the representation size of the input information so that it can be merged with the features of the same scale from the left path;
(10) on the compression path, the number of feature maps extracted by each 3D convolutional residual learning module (except the 1st module) is 2 times the number of feature maps of the previous module;
(11) on the expansion path, the number of feature maps extracted by each 3D convolutional residual learning module is half the number of feature maps of the previous module;
(12) through the (n-4) upsampling operations on the expansion path, the low-resolution feature maps are expanded and the size of the input data is restored layer by layer;
(13) the feature maps extracted by each 3D convolutional residual learning module of the left path are fused with the feature maps of the right path at the same resolution through skip connections (horizontal gray arrows in FIG. 5), compensating for details lost during downsampling compression and refining the final predicted delineation result;
(14) the skip connections merge features of the same resolution with the feature information of the corresponding layer and combine features of different layers, so that coarse-to-fine multi-scale information is effectively fused;
(15) all the feature information extracted by the network finally passes through the last convolutional layer L, whose convolution kernels have size 1 x 1 x 1 and which realizes a fully connected operation, yielding a probability estimate with two channels of the same size as the input volume data, representing respectively the probability that the corresponding voxel is tumor (or OAR) and background;
(16) tumor and non-tumor are classified by soft-max. Let x be the coordinate of a voxel; the posterior probability that voxel x is predicted as category c is estimated by equation (15):

p_c(x) = \frac{\exp\!\left( a_c^L(x) \right)}{\sum_{c'=1}^{C} \exp\!\left( a_{c'}^L(x) \right)}    (15)

where a_c^L(x) denotes the activation of channel c of the last convolutional layer L at voxel x, and C denotes the total number of categories; in this example C = 2, i.e. tumor and non-tumor. By comparing p_c(x) over the categories, the voxel is finally assigned to tumor or non-tumor;
(17) each stage in FIG. 5 is implemented with a fully convolutional network; through the processing of each stage, the size of the output result is the same as the size of the single-modality input image, realizing dense prediction;
(18) end-to-end 3D deeply supervised learning is performed using the DICE similarity coefficient as the optimization objective function for AgideepIRT training. Let the volume to be delineated contain V voxels, let p_i ∈ [0, 1] be the predicted binary segmentation result for the i-th voxel, and let t_i ∈ {0, 1} be the binary label of the i-th voxel in the pathological gold standard or the clinical expert's delineation. The DICE coefficient is calculated with equation (16):

D = \frac{2 \sum_{i=1}^{V} p_i\, t_i}{\sum_{i=1}^{V} p_i^2 + \sum_{i=1}^{V} t_i^2}    (16)

The gradient of the DICE coefficient with respect to the j-th voxel is derived from equation (16) and calculated with equation (17):

\frac{\partial D}{\partial p_j} = 2 \left[ \frac{t_j \left( \sum_{i=1}^{V} p_i^2 + \sum_{i=1}^{V} t_i^2 \right) - 2 p_j \sum_{i=1}^{V} p_i\, t_i}{\left( \sum_{i=1}^{V} p_i^2 + \sum_{i=1}^{V} t_i^2 \right)^2} \right]    (17)
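A small PyTorch sketch of the DICE objective of equations (16)-(17) as reconstructed above (a soft Dice formulation); autograd supplies the gradient of equation (17), so it is not coded explicitly. The smoothing constant and the dummy tensors are illustrative assumptions.

```python
import torch

def soft_dice(pred, target, eps=1e-6):
    """Soft DICE coefficient of eq. (16): pred holds the per-voxel tumor probabilities p_i,
    target holds the binary expert/gold-standard labels t_i, flattened over the V voxels."""
    p = pred.reshape(-1)
    t = target.reshape(-1)
    num = 2.0 * torch.sum(p * t)
    den = torch.sum(p * p) + torch.sum(t * t) + eps
    return num / den

# Training uses 1 - DICE as the loss; backpropagation through this expression yields the
# per-voxel gradient of eq. (17) automatically.
pred = torch.rand(1, 32, 32, 32, requires_grad=True)
target = (torch.rand(1, 32, 32, 32) > 0.5).float()
loss = 1.0 - soft_dice(pred, target)
loss.backward()
```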
(19) FIG. 5 uses the AgideepIRT model to classify, identify and detect the radiotherapy target area and organs at risk; the detection result may contain burrs and very small hole regions. This embodiment therefore further combines the prior knowledge and geometric constraints of clinical radiotherapy experts and applies post-processing methods such as a 3D conditional random field (CRF), a level set method, a superpixel method or morphological filtering to further process the classification, identification and detection results obtained by AgideepIRT for normal tissues and organs or for the radiotherapy target area, finally delineating the normal tissues and organs or the tumor radiotherapy target area with high precision.
Step 2):
A coarse-to-fine method, from tissues and organs to the tumor target area, is adopted for identification and delineation, as shown in FIG. 9. The classification, identification and detection of normal organs and tissues and of the tumor radiotherapy target area in FIG. 9 all use the corresponding AgideepIRT network models trained and learned in step 1). Because different tissues and organs or tumor target areas have different image characteristics, CT, PET and MRI images are combined as input to the AgideepIRT network to learn the multi-level, high-level image features of normal organs and tissues or of tumor target areas, and detection, classification and recognition models are established for each of them;
The AgideepIRT modeling of normal-organ classification and recognition differs from that of tumor classification and recognition only in the supervision images: a label image of the specific normal organ or of the specific tumor is adopted respectively; the feature parameters of the normal-organ or tumor classification and recognition network are fine-tuned by supervised learning and back-propagation, and the highest layer of the trained corresponding AgideepIRT network is used as the feature model of the current organ or tumor;
FIG. 9 automatically delineates normal tissues and organs and the tumor target area from coarse to fine. First, the classification and recognition models of normal organs and tissues determine whether an organ contains a lesion; then, the images of the lesion regions are input into the tumor target-area detection, classification and recognition network AgideepIRT; finally, the AgideepIRT detection, classification and recognition results for the tumor target area are post-processed further in combination with the prior knowledge of clinical tumor radiotherapy experts, and the tumor radiotherapy target area is finally delineated with high precision.
Step 3):
The premise of successful application of deep learning is massive data, in particular enough labeled data. However, labeling tumors and organs at risk on high-resolution three-dimensional PET, CT and MRI image sequence sets is not a simple task: it not only requires a great deal of time and effort from clinical experts, who are already in short supply, but it is also difficult to achieve objective and correct labeling. Therefore, at present deep neural network training for learning the essential image features of tumors cannot rely on massive gold-standard image data labeled by clinical experts. This is addressed by an AgideepIRT network learning method that combines transfer learning and supervised learning:
Transfer learning trains the feature parameters of the deep neural network with a sample data set of targets other than those to be classified and recognized (the so-called source-domain training sample set), and then retrains the network for the targets to be classified and recognized using the trained network parameters as the initial parameters of that network. As shown in FIG. 10, the source-domain sample set is a natural image set and the target-domain sample set is a medical image set;
A multi-level transfer learning method is adopted, and AgideepIRT transfer learning is carried out at 4 levels according to the principle of maximum feature similarity:
(1) transfer from the natural-image recognition set ImageNet to the PET, CT and MRI radiotherapy target area and organ-at-risk recognition training set. The natural image set ImageNet is used to pre-train the AgideepIRT network feature parameters, and the pre-training result is transferred as the initial values for the fine-tuning learning of the AgideepIRT network feature parameters;
(2) transfer from one anatomical region to another: of the human anatomy, the head and neck is most similar to natural images, so deep neural network training and learning starts from the head-and-neck region. Because the PET, CT and MRI images of tumors in different anatomical regions have a certain similarity, the network parameters learned by training on the head-and-neck region can be transferred to the brain and to the thoracic-abdominal region, and then from the thoracic-abdominal region to the pelvic region;
(3) transfer from one tumor/organ to other tumors or organs within the same anatomical region: for example from nasopharyngeal carcinoma to other head-and-neck tumors, from lung cancer to thoracic-abdominal tumors, from prostate cancer to pelvic tumors, and from brain glioma to other brain tumors;
(4) transfer from normal organs to tumors within the same anatomical region;
On the one hand, combining transfer learning with supervised learning effectively combines the complementary functional and structural information in PET, CT and MRI and helps improve the precision of classification and delineation; on the other hand, exploiting the degree of feature-parameter similarity between natural images, medical images of different body sites and different tumor types, the deep learning network is initialized by transfer learning and then fine-tuned with label data, adjusting all hidden-layer parameters of the network from the output layer downward through supervised learning and the back-propagation algorithm, so that the features of normal organs and tumors in the medical images are expressed better. The AgideepIRT network learning method of this embodiment, combining transfer learning and supervised learning, is shown in FIG. 11.
The AgideepIRT network is initialized with random numbers and pre-trained on an existing large-scale natural image set; the pre-trained network feature parameters are transferred to AgideepIRT, and the AgideepIRT network feature parameters for classification and recognition of a specific tumor or organ at risk are then obtained through supervised training. The main processing blocks in FIG. 11 are as follows:
(a) transfer learning: the block has two inputs; the solid arrows denote network parameters reused from the output of the previous block, whose source-domain image sample set was the training sample set of that previous block, and the hollow arrow denotes the target-domain image input. The block initializes the network parameters of the target domain with the reused network parameters and trains the AgideepIRT network with the relatively small sample set of the target domain. The three "transfer learning DL" blocks in FIG. 11 represent network parameter reuse and transfer at different levels: from the natural image domain to the medical image domain; from the medical image domain of one anatomical region to the medical image domain of another anatomical region; and from the image domain of one type of tumor or organ at risk to the image domain of a tumor or organ at risk in the same vicinity. By exploiting the degree of feature-parameter similarity between the source and target domains, the AgideepIRT tumor and organ-at-risk classification network achieves higher accuracy even when effective tumor and organ-at-risk label data are relatively scarce;
(b) supervised learning: the network is fine-tuned with the label data of the specific tumor and organs at risk, adjusting all hidden-layer nodes of the AgideepIRT network from the output layer downward through supervised learning and the back-propagation algorithm. Supervised learning of the AgideepIRT network fine-tunes the network feature parameters, extracts the multi-level, high-level discriminative features of the specific tumor and organs at risk, and improves the accuracy of tumor and organ-at-risk classification, detection and identification.
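A condensed, self-contained PyTorch sketch of blocks (a) and (b): pre-trained (source-domain) parameters initialize the network, and supervised fine-tuning with clinician label data then adjusts all layers by back-propagation. The tiny network, the placeholder "pretrained" weights and the dummy data are illustrative assumptions standing in for AgideepIRT and the real image sets.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-in network; in practice this would be the full AgideepIRT model.
model = nn.Sequential(nn.Conv3d(3, 8, 3, padding=1), nn.PReLU(8),
                      nn.Conv3d(8, 2, 1))            # 2 output channels: tumor / background

# (a) Transfer learning: reuse pre-trained (source-domain) parameters as the initial values.
# Here a copied state dict stands in for ImageNet- or head-and-neck-pretrained weights;
# strict=False keeps the matching layers and leaves the rest at their fresh initialization.
pretrained = {k: v.clone() for k, v in model.state_dict().items()}   # placeholder source weights
model.load_state_dict(pretrained, strict=False)

# (b) Supervised fine-tuning with clinician-delineated labels: all layers remain trainable
# and back-propagation adjusts the hidden layers from the output layer downward.
volumes = torch.randn(4, 3, 16, 16, 16)              # dummy multi-modal volumes (CT/PET/MRI)
labels = torch.randint(0, 2, (4, 16, 16, 16))        # dummy expert delineation masks
loader = DataLoader(TensorDataset(volumes, labels), batch_size=2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)                   # logits: (B, 2, D, H, W); y: (B, D, H, W)
        loss.backward()
        optimizer.step()
```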
Through a large amount of data training, a stable AgideepIRT organ-at-risk and tumor 3D delineation model is obtained. The tumor 3D delineation model was verified using the test images, and the results are shown in fig. 12.
By combining the embodiments 1-3, an intelligent delineation method of the tumor radiotherapy target area and the organs at risk can be obtained, as shown in fig. 1.
The method of the invention can be used for accurately, intelligently and automatically delineating the tumor radiotherapy target area and the organs at risk.
What has been described above are merely some embodiments of the present invention. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept, and such changes and modifications fall within the scope of the invention.
Claims (10)
1. An intelligent and automatic delineation method for tumor radiotherapy target areas and organs at risk is characterized in that: the method comprises the following steps:
1-1) tumor image preprocessing: carrying out preprocessing such as three-dimensional reconstruction, denoising, enhancement, registration and fusion of tumor medical images such as CT, CBCT, MRI and PET;
1-2) automatic extraction of tumor image features: automatically extracting one or more sets of tumor radiomics (texture feature) information from the preprocessed multi-modal CT, CBCT, MRI, PET and/or ultrasound tumor medical image data, including but not limited to: 1) first-order statistical texture features (variance, skewness, kurtosis); 2) texture features based on the neighborhood gray-level difference matrix (contrast, frequency, coarseness, complexity, texture strength); 3) texture features based on the gray-level run-length matrix (short-run emphasis, long-run emphasis, gray-level non-uniformity, run percentage, low gray-level run emphasis, high gray-level run emphasis, short-run low gray-level emphasis, short-run high gray-level emphasis, long-run low gray-level emphasis, long-run high gray-level emphasis); 4) texture features based on the gray-level co-occurrence matrix (energy/angular second moment, entropy, contrast, inverse difference moment, correlation, variance, sum mean, sum variance, difference variance, difference entropy, cluster shade, cluster prominence, maximum probability); 5) texture features based on the gray-level size-zone matrix; 6) image features based on an adaptive regression kernel; 7) multi-level, implicit tumor image features obtained by deep learning with a three-dimensional deep convolutional neural network;
1-3) intelligent and automatic delineation of the tumor radiotherapy target area (GTV) and organs at risk (OAR): deep learning, machine learning, artificial intelligence, region growing, graph theory (random walk), geometric level set and/or statistical methods are adopted to delineate the tumor radiotherapy target area (GTV) and organs at risk (OAR) intelligently and automatically.
2. The intelligent and automatic delineation method of tumor radiotherapy target area and organs at risk according to claim 1, wherein: the image preprocessing comprises the following steps:
2-1) image acquisition: includes acquiring an image of two parts:
(a) the images for tumor diagnosis, such as PET/CT, PET/MRI, CT and the like, and radiotherapy simulation positioning CT (SCT) for making a treatment plan are scanned, imaged and three-dimensionally reconstructed and filed by commercial imaging equipment of a hospital and are obtained by exporting DICOM files thereof through a clinical PACS system of the hospital, and the DICOM image files also comprise parameter information of each scanning and imaging;
(b) on-board cone beam CT (CBCT), MRI or ultrasound images used for radiotherapy guidance.
2-2) preprocessing: comprising the following steps:
(1) extracting relevant information such as the images, resolution, slice thickness and coordinates;
(2) using the SCT as the reference image, registering and interpolating the images of the various modalities to the same spatial resolution as the SCT by rigid coarse registration combined with high-precision deformable (elastic) fine registration;
(3) removing the gantry structures from the CT images; denoising the PET images; performing image enhancement; and normalizing each modality, i.e. normalizing the image mean to 0 and the variance to 1;
(4) multi-modal image information fusion (a normalization and fusion sketch follows this claim).
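As a rough illustration of items (3) and (4) above, the following sketch performs per-modality z-score normalization (mean 0, unit variance) and a simple channel-wise fusion; it assumes the CT/PET/MRI volumes have already been registered and resampled to the planning-CT (SCT) grid so their shapes match, and the function names are illustrative rather than taken from the patent.

```python
# Illustrative sketch, not the patented pipeline: per-modality z-score
# normalisation and a simple channel-wise fusion of registered volumes.
import numpy as np

def zscore(volume: np.ndarray) -> np.ndarray:
    """Normalise a volume to zero mean and unit variance."""
    v = volume.astype(np.float32)
    return (v - v.mean()) / (v.std() + 1e-8)

def fuse_modalities(ct: np.ndarray, pet: np.ndarray, mri: np.ndarray) -> np.ndarray:
    """Stack the normalised modalities into one multi-channel volume."""
    return np.stack([zscore(ct), zscore(pet), zscore(mri)], axis=0)
```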
3. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 2, wherein: in step 1-2), the adaptive regression kernel is used to extract tumor image features; the adaptive regression kernels of normal-tissue pixel points and tumor pixel points having the same SUV value differ significantly, so the adaptive regression kernel can effectively characterize changes in the gray value and texture of the image; the specific steps are as follows:
3-1) estimating the adaptive regression kernel value by using the local neighborhood covariance matrix of the image, the corresponding kernel function being defined as:

K(x_j − x_i) = √det(C_i) · exp( −(x_j − x_i)ᵀ C_i (x_j − x_i) / (2h²) )    (1)

wherein x_i denotes the i-th pixel point expressed in 3D coordinate form; z(x_i) denotes the gray value of the i-th pixel point x_i; x_j is a sampling point near x_i; z(x_j) denotes the gray value of the sampling point x_j; C_i is the local neighborhood covariance matrix of the image; the normalization constant of the kernel is taken as 1; and h is an adjustable parameter;

3-2) the local edge structure of the image is related to the gradient covariance matrix of the image gray levels, so the local neighborhood covariance matrix C_i of the image can be estimated from the gray-level gradient information of the image and expressed as:

C_i ≈ J_iᵀ J_i    (2)

wherein J_i is the gray-level gradient matrix of the neighborhood of x_i;

3-3) the medical image is three-dimensional volume data, so the three-dimensional spatial adaptive regression kernel of the image needs to be calculated, and its gradient matrix is:

J_i = [ z_x(x_j)  z_y(x_j)  z_z(x_j) ],  x_j ∈ w_i    (3)

wherein z_x(x_j), z_y(x_j) and z_z(x_j) are the derivative values of the image gray-value function z(·) at the pixel point x_j in the three orthogonal directions, x_j is a pixel point in the region of interest (ROI) of the tumor image, and w_i is a local window of size n × n × n, wherein n is a positive odd number;

3-4) the covariance estimation matrix is in general not full rank and is unstable; therefore, the gradient matrix is subjected to eigenvalue decomposition by a regularization method, expressed as:

C_i = γ_i Σ_{q=1..3} ρ_q u_q u_qᵀ    (4)

(5)

wherein ρ_q and γ_i are respectively the elongation (stretching) parameters and the scale parameter, and λ′ is a regularization parameter which suppresses noise and ensures that the denominators of ρ_q and γ_i are not zero; a fixed value of λ′ is taken in the experiments of the embodiments of the invention; the eigenvalues λ_q and eigenvectors u_q are obtained by eigenvalue decomposition of J_iᵀ J_i:

J_iᵀ J_i = Σ_{q=1..3} λ_q u_q u_qᵀ    (6)
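A rough numerical sketch of steps 3-1) to 3-4) is given below, under the assumption that the adaptive regression kernel follows the usual steering-kernel construction: a gradient matrix J_i is built over an n × n × n window, C_i is estimated from J_iᵀJ_i with eigenvalue regularization, and the kernel decays with the distance (x_j − x_i)ᵀ C_i (x_j − x_i). The helper name, window handling and the additive regularization parameter `lam` are illustrative choices, not the patent's exact procedure.

```python
# Illustrative sketch of an adaptive (steering-style) regression kernel in 3D.
import numpy as np

def adaptive_kernel_3d(volume, center, n=5, h=1.0, lam=1e-2):
    """Kernel weights of an n*n*n window around `center` (a (z, y, x) index).

    Assumes `center` lies at least n//2 voxels away from the volume boundary.
    """
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    r = n // 2
    cz, cy, cx = center
    win = (slice(cz - r, cz + r + 1),
           slice(cy - r, cy + r + 1),
           slice(cx - r, cx + r + 1))
    # Gradient matrix J_i: one row per voxel of the window, cf. eq. (3).
    J = np.stack([gz[win].ravel(), gy[win].ravel(), gx[win].ravel()], axis=1)
    # Local covariance estimate C_i ~ J_i^T J_i, cf. eq. (2).
    C = J.T @ J / J.shape[0]
    # Regularised eigen-decomposition so the estimate stays full rank, cf. 3-4).
    w, U = np.linalg.eigh(C)
    C_reg = U @ np.diag(w + lam) @ U.T
    # Kernel value for every voxel offset in the window, cf. eq. (1).
    offsets = np.stack(np.meshgrid(np.arange(-r, r + 1),
                                   np.arange(-r, r + 1),
                                   np.arange(-r, r + 1),
                                   indexing="ij"), axis=-1).reshape(-1, 3)
    offsets = offsets.astype(np.float64)
    mahal = np.einsum("ki,ij,kj->k", offsets, C_reg, offsets)
    return np.sqrt(np.linalg.det(C_reg)) * np.exp(-mahal / (2.0 * h ** 2))
```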
the intelligent and automatic delineation method of tumor radiotherapy target area and organs at risk according to claim 3, wherein: the random walk delineation method adopting the integrated adaptive regression kernel in the step 1-3) comprises the following specific steps:
4-1) defining each pixel of the image as a vertex of the graph, and defining the similarity of spatially adjacent pixels as the weight of the edge connecting the corresponding pixels (vertices), thereby constructing an undirected weighted graphWhereinis a set of the vertices in the graph,is a collection of edges that are to be considered,is connecting adjacent vertices in the graphAndan edge of which the weight is(ii) a The weight value represents the probability that the random walker walks along the edge, and the weight value of the edge between two non-adjacent vertexes in the weighted graph is 0, namely the random walker does not pass through the edge;
4-2) calculating the edge weight w_ij integrating the adaptive regression kernel by using the following formula:
(7)
4-3) the probability that a random walker starting from any vertex v_i first reaches a marked vertex of a certain object class is the same as the solution of the so-called "Dirichlet minimization problem"; therefore, the image segmentation can be obtained by solving for the optimal solution x of the objective function of the Dirichlet minimization problem in the random walk:

D[x] = ½ xᵀ L x    (8)

wherein x represents the probability that each pixel point, starting from a non-seed point, first reaches a marked seed point of a certain object class;

the Laplacian matrix L is defined as:

L_ij = d_i if i = j; −w_ij if v_i and v_j are adjacent vertices; 0 otherwise    (9)

wherein d_i is associated with the vertex v_i and represents the degree of the vertex v_i, i.e. the sum of the weights of all edges connected to the vertex v_i;

4-4) dividing the vertices in the graph into two classes, namely the set V_M of marked pixel points of a certain object class and the set V_U of unmarked pixel points, with V_M ∪ V_U = V and V_M ∩ V_U = ∅; according to the sets to which the different vertices belong, the Laplacian matrix can be decomposed into the following form:

L = [ L_M  B ; Bᵀ  L_U ]    (10)

so that the Dirichlet problem can be decomposed into the following form:

D[x_U] = ½ ( x_Mᵀ L_M x_M + 2 x_Uᵀ Bᵀ x_M + x_Uᵀ L_U x_U )    (11)

wherein x_M and x_U respectively represent the probability vectors of the random walker first reaching a marked pixel point of a certain object class from the marked pixel points and from the unmarked pixel points;

to obtain the optimal solution of the Dirichlet problem, D[x_U] can be differentiated with respect to x_U and set equal to zero, yielding the following algebraic matrix equation, which is then solved:

L_U x_U = −Bᵀ x_M    (12)
4-5) delineating the tumor and organs at risk according to the result (probability values) solved in step 4-4): for the object-class probability vector corresponding to each pixel point, the class with the maximum probability value is selected to classify and mark the pixel point; if the probability threshold of the selected tumor region is 0.5, pixel points with probability values greater than 0.5 are marked as points of the tumor region, and otherwise as points of the normal tissue region.
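The following is a compact, illustrative implementation of steps 4-1) to 4-5) in the spirit of the classical random-walker formulation, written for a 2D slice for brevity. It uses plain intensity differences for the edge weights; the `weight` function below is the piece the patent's adaptive-regression-kernel edge weights of formula (7) would replace. The value of β and the 0.5 threshold follow the text, while everything else is an illustrative assumption.

```python
# Compact, illustrative random-walker delineation for a 2D slice (steps 4-1 to 4-5).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def random_walker_2d(image, seeds, beta=90.0):
    """seeds: 0 = unmarked, 1 = tumour seed, 2 = normal-tissue seed."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    img = image.astype(np.float64).ravel()

    def weight(a, b):
        # Placeholder Gaussian weight on intensity differences; the patent's
        # adaptive-regression-kernel weight of formula (7) would go here.
        return np.exp(-beta * (img[a] - img[b]) ** 2)

    # 4-1)/4-2): edges between 4-connected neighbouring pixels.
    rows, cols, vals = [], [], []
    for a, b in [(idx[:, :-1].ravel(), idx[:, 1:].ravel()),
                 (idx[:-1, :].ravel(), idx[1:, :].ravel())]:
        w_ab = weight(a, b)
        rows += [a, b]; cols += [b, a]; vals += [w_ab, w_ab]
    W = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w))
    L = (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()   # cf. eq. (9)

    m_idx = np.flatnonzero(seeds.ravel() > 0)      # marked pixels V_M
    u_idx = np.flatnonzero(seeds.ravel() == 0)     # unmarked pixels V_U
    x_m = (seeds.ravel()[m_idx] == 1).astype(np.float64)
    L_u = L[u_idx][:, u_idx]
    B = L[u_idx][:, m_idx]
    # 4-4): solve L_U x_U = -B^T x_M, cf. eq. (12).
    x_u = spsolve(L_u.tocsc(), -B @ x_m)

    prob = np.zeros(h * w)
    prob[m_idx] = x_m
    prob[u_idx] = x_u
    prob = prob.reshape(h, w)
    # 4-5): threshold the tumour-class probability at 0.5.
    return prob > 0.5, prob

# Toy example: a bright square with one tumour seed and one background seed.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
seeds = np.zeros((64, 64), dtype=int); seeds[30, 30] = 1; seeds[5, 5] = 2
mask, prob = random_walker_2d(img, seeds)
```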
5. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 2, wherein: 1) in step 1-2), the tumor image features are automatically extracted by using a deep neural network; 2) in step 1-3), a three-dimensional (3D) symmetrical deep convolutional neural network (AgideepIRT) is adopted to delineate the tumor target area and organs at risk, specifically comprising the following steps:
5-1) deep learning of the tumor radiotherapy target area and organ-at-risk delineation network model: adopting a 3D symmetrical deep convolutional neural network AgideepIRT that integrates multi-modal images and multi-scale image feature information, and extracting deep-learning features for an integrated AgideepIRT network model for classifying, identifying and detecting the tumor radiotherapy target area or organs at risk; and supervising and training the AgideepIRT network model by using the tumor radiotherapy target area and organ-at-risk information delineated by clinicians;
5-2) delineation of the tumor radiotherapy target area and organs at risk: performing classification, identification, detection and delineation of the tumor radiotherapy target area and organs at risk based on the trained and learned AgideepIRT network model, specifically comprising the following steps:
1) establishing a classification and identification network model: because different tissues and organs or tumor target areas have different image characteristics, the invention simultaneously adopts tumor CT, PET and MRI images as the input of the AgideepIRT network, trains and learns the multi-level, high-level image features of the tumor radiotherapy target area and the organs at risk by using multi-level transfer learning and supervised learning methods, and respectively establishes detection, classification and identification models of the tumor radiotherapy target area and of each organ at risk;
2) automatically delineating the tumor radiotherapy target area and organs at risk from coarse to fine: first, determining whether an organ has a lesion by using the classification and identification model of normal organ tissues; then, inputting the images of the lesion area into the tumor target area detection, classification and identification network AgideepIRT; and finally, further post-processing the AgideepIRT detection, classification and identification results of the tumor target area in combination with the prior knowledge of clinical tumor radiotherapy experts, so as to delineate the tumor radiotherapy target area with high precision.
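The AgideepIRT architecture itself is not disclosed in this text, so the following PyTorch sketch only illustrates the general idea of claim 5: a small symmetric 3D encoder-decoder taking registered CT/PET/MRI volumes as three input channels and predicting a per-voxel label map, with one skip connection in the spirit of the multi-scale fusion described in claim 7. All class names, layer widths and sizes are illustrative assumptions.

```python
# Purely illustrative PyTorch sketch of a tiny symmetric 3D encoder-decoder
# with multi-modal input channels; not the AgideepIRT network.
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    def __init__(self, in_channels=3, num_classes=2, width=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.MaxPool3d(2)
        self.mid = nn.Sequential(
            nn.Conv3d(width, 2 * width, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose3d(2 * width, width, 2, stride=2)
        # Skip connection: encoder features are concatenated with the upsampled
        # decoder features (coarse-to-fine multi-scale fusion, cf. claim 7).
        self.dec = nn.Sequential(
            nn.Conv3d(2 * width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, num_classes, 1))

    def forward(self, x):
        e = self.enc(x)                      # same spatial size as the input
        m = self.mid(self.down(e))           # coarser scale
        u = self.up(m)                       # back to the input resolution
        return self.dec(torch.cat([e, u], dim=1))

# Example: one 3-channel (CT/PET/MRI) patch of 32**3 voxels.
logits = TinySeg3D()(torch.randn(1, 3, 32, 32, 32))   # -> (1, 2, 32, 32, 32)
```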
6. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 5, wherein: the symmetrical network implemented by the AgideepIRT 3D-convolution residual learning module Res(Ch, l, k) produces, after the processing of each stage, an output of the same size as the single-modality input image, thereby realizing dense prediction; and end-to-end 3D deeply supervised learning is performed by adopting the DICE similarity coefficient as the optimization objective function for AgideepIRT training and learning.
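Claim 6 names the DICE similarity coefficient as the training objective; a minimal soft-Dice loss of the common form 2|A∩B|/(|A|+|B|), written as a sketch rather than as the exact loss used for AgideepIRT, could look like this:

```python
# Minimal soft-Dice loss; an assumed formulation, not necessarily the patent's.
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """logits, target: tensors of shape (N, 1, D, H, W); target is a 0/1 mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3, 4))
    denom = prob.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    dice = (2.0 * inter + eps) / (denom + eps)
    return 1.0 - dice.mean()
```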
7. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 6, wherein: the AgideepIRT fuses, converges and integrates features of the same resolution with the feature information of the corresponding layer through skip connections, and combines the features of different layers, thereby effectively fusing multi-scale information from coarse to fine and refining the final delineation result.
8. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 7, wherein: the deep neural network learning methods in step 5-1) comprise multi-level transfer learning and supervised learning.
9. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 8, wherein: the multi-level transfer learning is divided into 4 levels according to the principle of maximum feature similarity:
9-1) transfer from the natural-image target recognition set ImageNet to the PET, CT and MRI image sequence set for tumor radiotherapy target area and organ-at-risk recognition: the natural image set ImageNet is adopted to pre-train and learn the AgideepIRT network feature parameters, and the pre-training result is transferred as the initial value into the AgideepIRT network feature-parameter fine-tuning learning process;
9-2) transfer from one anatomical region to another: deep neural network training and learning starts from the head and neck region; because the PET, CT and MRI images of tumors in different anatomical regions have a certain similarity, the network parameters learned by training on the head and neck region can be transferred to the brain and to the chest and abdomen regions, and then from the chest and abdomen regions to the pelvic region;
9-3) transfer from one tumor/organ to other tumors and organs within the same anatomical region: the embodiments of this patent transfer from nasopharyngeal carcinoma to head and neck tumors, from lung cancer to thoracic and abdominal tumors, from prostate cancer to pelvic tumors, and from brain glioma to brain tumors;
9-4) transfer from normal organs to tumors within the same anatomical region.
10. The intelligent and automatic delineation method of the tumor radiotherapy target area and organs at risk according to claim 9, wherein: the supervised learning performs network fine-tuning training and learning by using the information of certain tumors and organs at risk manually delineated by clinicians, and adjusts all hidden-layer nodes of the AgideepIRT network backwards, from the output layer downwards, through supervised learning and the back-propagation algorithm; by fine-tuning the network feature parameters through supervised learning of the AgideepIRT network, the multi-level, high-level discriminative features of the specific tumor target area and organs at risk are extracted, and the accuracy of classification, detection and identification of the tumor and organs at risk is improved; through training on a large amount of data, a stable AgideepIRT organ-at-risk and tumor target area 3D delineation model is obtained.
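As a hedged sketch of the staged transfer learning of claim 9 and the supervised fine-tuning of claim 10, the snippet below reuses the TinySeg3D network and soft_dice_loss from the earlier sketches, loads a hypothetical checkpoint pre-trained on one anatomical region, freezes the shallow feature extractor, and fine-tunes the remaining layers on clinician-delineated contours by back-propagation. The checkpoint path, optimizer choice and learning rate are placeholders, not values from the patent.

```python
# Hedged sketch of staged transfer (claim 9) plus supervised fine-tuning (claim 10),
# reusing TinySeg3D and soft_dice_loss from the sketches above.
import torch

model = TinySeg3D()
# Hypothetical checkpoint pre-trained on another region (e.g. head and neck).
state = torch.load("headneck_pretrained.pt", map_location="cpu")
model.load_state_dict(state, strict=False)          # transfer as initial values

for p in model.enc.parameters():                    # freeze shallow features
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

def fine_tune_step(volume, contour_mask):
    """One supervised step on a clinician-delineated contour mask."""
    optimizer.zero_grad()
    logits = model(volume)
    loss = soft_dice_loss(logits[:, :1], contour_mask)   # Dice objective
    loss.backward()                                      # back-propagation
    optimizer.step()
    return float(loss)
```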
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710687331.0A CN107403201A (en) | 2017-08-11 | 2017-08-11 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710687331.0A CN107403201A (en) | 2017-08-11 | 2017-08-11 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN107403201A true CN107403201A (en) | 2017-11-28 |
Family
ID=60396378
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710687331.0A Pending CN107403201A (en) | 2017-08-11 | 2017-08-11 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN107403201A (en) |
Cited By (144)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107910061A (en) * | 2017-12-01 | 2018-04-13 | 中南大学 | A kind of medical data processing method and system |
| CN108109152A (en) * | 2018-01-03 | 2018-06-01 | 深圳北航新兴产业技术研究院 | Medical Images Classification and dividing method and device |
| CN108170712A (en) * | 2017-11-29 | 2018-06-15 | 浙江大学 | A kind of method using the multi-media network study maximum boundary multi-media network expression comprising social geography information |
| CN108198181A (en) * | 2018-01-23 | 2018-06-22 | 电子科技大学 | A kind of thermal-induced imagery processing method based on region segmentation and image co-registration |
| CN108229658A (en) * | 2017-12-27 | 2018-06-29 | 深圳先进技术研究院 | The implementation method and device of object detector based on finite sample |
| CN108257134A (en) * | 2017-12-21 | 2018-07-06 | 深圳大学 | Nasopharyngeal Carcinoma Lesions automatic division method and system based on deep learning |
| CN108288496A (en) * | 2018-01-26 | 2018-07-17 | 中国人民解放军总医院 | Tumor volume intelligence delineation method and device |
| CN108447551A (en) * | 2018-02-09 | 2018-08-24 | 北京连心医疗科技有限公司 | A kind of automatic delineation method in target area based on deep learning, equipment and storage medium |
| CN108492873A (en) * | 2018-03-13 | 2018-09-04 | 山东大学 | A kind of knowledge migration learning method for auxiliary diagnosis Alzheimer's disease |
| CN108492309A (en) * | 2018-01-21 | 2018-09-04 | 西安电子科技大学 | Magnetic resonance image medium sized vein blood vessel segmentation method based on migration convolutional neural networks |
| CN108491856A (en) * | 2018-02-08 | 2018-09-04 | 西安电子科技大学 | A kind of image scene classification method based on Analysis On Multi-scale Features convolutional neural networks |
| CN108537773A (en) * | 2018-02-11 | 2018-09-14 | 中国科学院苏州生物医学工程技术研究所 | Intelligence auxiliary mirror method for distinguishing is carried out for cancer of pancreas and pancreas inflammatory disease |
| CN108564567A (en) * | 2018-03-15 | 2018-09-21 | 中山大学 | A kind of ultrahigh resolution pathological image cancerous region method for visualizing |
| CN108629772A (en) * | 2018-05-08 | 2018-10-09 | 上海商汤智能科技有限公司 | Image processing method and device, computer equipment and computer storage media |
| CN108648193A (en) * | 2018-06-06 | 2018-10-12 | 南方医科大学 | Biological tissue's image knows method for distinguishing and its system, computer storage media |
| CN108734210A (en) * | 2018-05-17 | 2018-11-02 | 浙江工业大学 | A kind of method for checking object based on cross-module state multi-scale feature fusion |
| CN108765417A (en) * | 2018-06-15 | 2018-11-06 | 西安邮电大学 | It is a kind of that system and method is generated based on the femur X-ray film of deep learning and digital reconstruction irradiation image |
| CN108815721A (en) * | 2018-05-18 | 2018-11-16 | 山东省肿瘤防治研究院(山东省肿瘤医院) | A kind of exposure dose determines method and system |
| CN108898608A (en) * | 2018-05-28 | 2018-11-27 | 广东技术师范学院 | A kind of prostate ultrasonic image division method and equipment |
| CN108898557A (en) * | 2018-05-30 | 2018-11-27 | 商汤集团有限公司 | Image restoration method and apparatus, electronic device, computer program, and storage medium |
| CN109003269A (en) * | 2018-07-19 | 2018-12-14 | 哈尔滨工业大学 | A kind of mark extracting method for the medical image lesion that can improve doctor's efficiency |
| CN109035160A (en) * | 2018-06-29 | 2018-12-18 | 哈尔滨商业大学 | The fusion method of medical image and the image detecting method learnt based on fusion medical image |
| CN109146899A (en) * | 2018-08-28 | 2019-01-04 | 众安信息技术服务有限公司 | CT image jeopardizes organ segmentation method and device |
| CN109146845A (en) * | 2018-07-16 | 2019-01-04 | 中南大学 | Head image sign point detecting method based on convolutional neural networks |
| CN109300136A (en) * | 2018-08-28 | 2019-02-01 | 众安信息技术服务有限公司 | An automatic segmentation method of organs at risk based on convolutional neural network |
| CN109330616A (en) * | 2018-12-08 | 2019-02-15 | 宁波耀通管阀科技有限公司 | Tumour hazard rating discrimination system |
| CN109378054A (en) * | 2018-12-13 | 2019-02-22 | 山西医科大学第医院 | A multi-modal image aided diagnosis system and its construction method |
| CN109378068A (en) * | 2018-08-21 | 2019-02-22 | 深圳大学 | A method and system for automatic evaluation of curative effect of nasopharyngeal carcinoma |
| CN109461161A (en) * | 2018-10-22 | 2019-03-12 | 北京连心医疗科技有限公司 | A method of human organ in medical image is split based on neural network |
| CN109523526A (en) * | 2018-11-08 | 2019-03-26 | 腾讯科技(深圳)有限公司 | Organize nodule detection and its model training method, device, equipment and system |
| CN109523584A (en) * | 2018-10-26 | 2019-03-26 | 上海联影医疗科技有限公司 | Image processing method, device, multi-mode imaging system, storage medium and equipment |
| CN109559303A (en) * | 2018-11-22 | 2019-04-02 | 广州达美智能科技有限公司 | Recognition methods, device and the computer readable storage medium of calcification point |
| CN109615642A (en) * | 2018-11-05 | 2019-04-12 | 北京全域医疗技术有限公司 | Jeopardize the automatic delineation method of organ and device in a kind of radiotherapy planning |
| CN109727235A (en) * | 2018-12-26 | 2019-05-07 | 苏州雷泰医疗科技有限公司 | A kind of automatic delineation algorithms of organ based on deep learning |
| CN109754860A (en) * | 2018-12-21 | 2019-05-14 | 苏州雷泰医疗科技有限公司 | A Method of Automatically Predicting Planning Difficulty Based on DVH Prediction Model |
| CN109785306A (en) * | 2019-01-09 | 2019-05-21 | 上海联影医疗科技有限公司 | Organ delineation method, device, computer equipment and storage medium |
| CN109858428A (en) * | 2019-01-28 | 2019-06-07 | 四川大学 | ANA flourescent sheet automatic identifying method based on machine learning and deep learning |
| CN109903272A (en) * | 2019-01-30 | 2019-06-18 | 西安天伟电子系统工程有限公司 | Object detection method, device, equipment, computer equipment and storage medium |
| CN109934796A (en) * | 2018-12-26 | 2019-06-25 | 苏州雷泰医疗科技有限公司 | A kind of automatic delineation method of organ based on Deep integrating study |
| CN109938764A (en) * | 2019-02-28 | 2019-06-28 | 佛山原子医疗设备有限公司 | A kind of adaptive multiple location scan imaging method and its system based on deep learning |
| CN109948667A (en) * | 2019-03-01 | 2019-06-28 | 桂林电子科技大学 | Image classification method and device for prediction of distant metastasis of head and neck cancer |
| CN109949288A (en) * | 2019-03-15 | 2019-06-28 | 上海联影智能医疗科技有限公司 | Tumor type determines system, method and storage medium |
| CN109978852A (en) * | 2019-03-22 | 2019-07-05 | 邃蓝智能科技(上海)有限公司 | The radiotherapy image Target delineations method and system of microtissue organ based on deep learning |
| CN110047080A (en) * | 2019-03-12 | 2019-07-23 | 天津大学 | A method of the multi-modal brain tumor image fine segmentation based on V-Net |
| CN110060300A (en) * | 2019-04-28 | 2019-07-26 | 中国科学技术大学 | A kind of CT image relative position prediction technique and system |
| CN110070546A (en) * | 2019-04-18 | 2019-07-30 | 山东师范大学 | A kind of multiple target based on deep learning jeopardizes the automatic division method of organ, apparatus and system |
| CN110136840A (en) * | 2019-05-17 | 2019-08-16 | 山东管理学院 | A medical image classification method, device and computer-readable storage medium based on self-weighted hierarchical biological features |
| CN110211139A (en) * | 2019-06-12 | 2019-09-06 | 安徽大学 | Automatic segmentation Radiotherapy of Esophageal Cancer target area and the method and system for jeopardizing organ |
| CN110232695A (en) * | 2019-06-18 | 2019-09-13 | 山东师范大学 | Left ventricle image partition method and system based on hybrid mode image |
| CN110264479A (en) * | 2019-06-25 | 2019-09-20 | 南京景三医疗科技有限公司 | Three-dimensional image segmentation method based on random walk and level set |
| CN110322426A (en) * | 2018-03-28 | 2019-10-11 | 北京连心医疗科技有限公司 | Tumor target delineation method, equipment and storage medium based on variable manikin |
| CN110378881A (en) * | 2019-07-05 | 2019-10-25 | 北京航空航天大学 | A kind of tumor-localizing system based on deep learning |
| CN110390351A (en) * | 2019-06-24 | 2019-10-29 | 浙江大学 | A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning |
| CN110390660A (en) * | 2018-04-16 | 2019-10-29 | 北京连心医疗科技有限公司 | A kind of medical image jeopardizes organ automatic classification method, equipment and storage medium |
| CN110415785A (en) * | 2019-08-29 | 2019-11-05 | 北京连心医疗科技有限公司 | The method and system of artificial intelligence guidance radiotherapy planning |
| CN110415239A (en) * | 2019-08-01 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment, medical treatment electronic equipment and medium |
| CN110415184A (en) * | 2019-06-28 | 2019-11-05 | 南开大学 | A Multimodal Image Enhancement Method Based on Orthogonal Metaspace |
| CN110473196A (en) * | 2019-08-14 | 2019-11-19 | 中南大学 | A kind of abdominal CT images target organ method for registering based on deep learning |
| CN110610527A (en) * | 2019-08-15 | 2019-12-24 | 苏州瑞派宁科技有限公司 | SUV calculation method, device, equipment, system and computer storage medium |
| CN110689521A (en) * | 2019-08-15 | 2020-01-14 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Automatic identification method and system for human body part to which medical image belongs |
| CN110706200A (en) * | 2019-09-02 | 2020-01-17 | 杭州深睿博联科技有限公司 | Data prediction method and device |
| CN110706749A (en) * | 2019-09-10 | 2020-01-17 | 至本医疗科技(上海)有限公司 | Cancer type prediction system and method based on tissue and organ differentiation hierarchical relation |
| CN110739049A (en) * | 2019-10-10 | 2020-01-31 | 上海联影智能医疗科技有限公司 | Image sketching method and device, storage medium and computer equipment |
| CN110766693A (en) * | 2018-09-06 | 2020-02-07 | 北京连心医疗科技有限公司 | Method for jointly predicting radiotherapy structure position based on multi-model neural network |
| CN110827961A (en) * | 2018-08-10 | 2020-02-21 | 北京连心医疗科技有限公司 | Automatic delineation method, device and storage medium for adaptive radiotherapy structure |
| CN110930421A (en) * | 2019-11-22 | 2020-03-27 | 电子科技大学 | A Segmentation Method for CBCT Tooth Image |
| CN110991458A (en) * | 2019-11-25 | 2020-04-10 | 创新奇智(北京)科技有限公司 | Artificial intelligence recognition result sampling system and sampling method based on image characteristics |
| CN110992406A (en) * | 2019-12-10 | 2020-04-10 | 张家港赛提菲克医疗器械有限公司 | Radiotherapy patient positioning rigid body registration algorithm based on region of interest |
| CN111047594A (en) * | 2019-11-06 | 2020-04-21 | 安徽医科大学 | Tumor MRI weak supervised learning analysis modeling method and model thereof |
| CN111091524A (en) * | 2018-10-08 | 2020-05-01 | 天津工业大学 | A segmentation method of prostate transrectal ultrasound images based on deep convolutional neural network |
| CN111127444A (en) * | 2019-12-26 | 2020-05-08 | 广州柏视医疗科技有限公司 | Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network |
| CN111161326A (en) * | 2018-11-08 | 2020-05-15 | 通用电气公司 | System and method for unsupervised deep learning for deformable image registration |
| WO2020099250A1 (en) * | 2018-11-13 | 2020-05-22 | Koninklijke Philips N.V. | Artificial intelligence (ai)-based standardized uptake value (suv) correction and variation assessment for positron emission tomography (pet) |
| CN111312373A (en) * | 2020-01-19 | 2020-06-19 | 浙江树人学院(浙江树人大学) | An automatic labeling method for PET/CT image fusion |
| CN111340135A (en) * | 2020-03-12 | 2020-06-26 | 广州领拓医疗科技有限公司 | Classification of small renal masses based on random projection |
| CN111374690A (en) * | 2018-12-28 | 2020-07-07 | 通用电气公司 | Medical imaging method and system |
| CN111420271A (en) * | 2020-04-02 | 2020-07-17 | 河北普尼医疗科技有限公司 | Electrode patch positioning method based on head tumor treatment |
| CN111477298A (en) * | 2020-04-03 | 2020-07-31 | 北京易康医疗科技有限公司 | A tracking method for tumor location changes during radiotherapy |
| CN111599445A (en) * | 2020-05-14 | 2020-08-28 | 安徽慧软科技有限公司 | Full-automatic CT image processing system for automatic drawing |
| CN111598883A (en) * | 2020-05-20 | 2020-08-28 | 重庆工程职业技术学院 | Calibration label equipment for acquiring cloud data medical image and working method |
| CN111613300A (en) * | 2019-02-22 | 2020-09-01 | 未艾医疗技术(深圳)有限公司 | Tumor and blood vessel Ai processing methods and products based on VRDS 4D medical images |
| CN111680758A (en) * | 2020-06-15 | 2020-09-18 | 杭州海康威视数字技术股份有限公司 | Image training sample generation method and device |
| CN111899850A (en) * | 2020-08-12 | 2020-11-06 | 上海依智医疗技术有限公司 | Medical image information processing method, display method and readable storage medium |
| US20200364910A1 (en) * | 2019-05-13 | 2020-11-19 | Adobe Inc. | Line drawing generation |
| CN112258457A (en) * | 2020-09-28 | 2021-01-22 | 汕头大学 | Multi-dimensional feature extraction method for full-volume three-dimensional ultrasonic image |
| CN112270660A (en) * | 2020-09-30 | 2021-01-26 | 四川大学 | Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network |
| CN112288683A (en) * | 2020-06-30 | 2021-01-29 | 深圳市智影医疗科技有限公司 | Apparatus and method for judging pulmonary tuberculosis based on multimodal fusion |
| CN112330674A (en) * | 2020-05-07 | 2021-02-05 | 南京信息工程大学 | Self-adaptive variable-scale convolution kernel method based on brain MRI (magnetic resonance imaging) three-dimensional image confidence |
| CN112348040A (en) * | 2019-08-07 | 2021-02-09 | 杭州海康威视数字技术股份有限公司 | Model training method, device and equipment |
| CN112541941A (en) * | 2020-12-07 | 2021-03-23 | 明峰医疗系统股份有限公司 | Scanning flow decision method and system based on CT locating sheet |
| CN112734790A (en) * | 2020-12-30 | 2021-04-30 | 武汉联影生命科学仪器有限公司 | Tumor region labeling method, system, device and readable storage medium |
| WO2021081759A1 (en) * | 2019-10-29 | 2021-05-06 | 中国科学院深圳先进技术研究院 | Collaborative imaging method and apparatus, storage medium, and collaborative imaging device |
| CN112802036A (en) * | 2021-03-16 | 2021-05-14 | 上海联影医疗科技股份有限公司 | Method, system and device for segmenting target area of three-dimensional medical image |
| CN112862822A (en) * | 2021-04-06 | 2021-05-28 | 华侨大学 | Ultrasonic breast tumor detection and classification method, device and medium |
| CN113077433A (en) * | 2021-03-30 | 2021-07-06 | 山东英信计算机技术有限公司 | Deep learning-based tumor target area cloud detection device, system, method and medium |
| CN113192053A (en) * | 2021-05-18 | 2021-07-30 | 北京大学第三医院(北京大学第三临床医学院) | Cervical tumor target area intelligent delineation method, equipment and medium based on deep learning |
| CN113222038A (en) * | 2021-05-24 | 2021-08-06 | 北京安德医智科技有限公司 | Breast lesion classification and positioning method and device based on nuclear magnetic image |
| CN113269734A (en) * | 2021-05-14 | 2021-08-17 | 成都市第三人民医院 | Tumor image detection method and device based on meta-learning feature fusion strategy |
| CN113302627A (en) * | 2018-11-16 | 2021-08-24 | 波尔多大学 | Method for determining a stereotactic brain target |
| CN113487579A (en) * | 2021-07-14 | 2021-10-08 | 广州柏视医疗科技有限公司 | Multi-mode migration method for automatically sketching model |
| CN113539402A (en) * | 2021-07-14 | 2021-10-22 | 广州柏视医疗科技有限公司 | Multi-mode image automatic sketching model migration method |
| CN113628325A (en) * | 2021-08-10 | 2021-11-09 | 海盐县南北湖医学人工智能研究院 | Small organ tumor evolution model establishing method and computer readable storage medium |
| CN113643255A (en) * | 2021-08-11 | 2021-11-12 | 锐视智慧(北京)医疗科技有限公司 | Method and system for sketching organs at risk based on deep learning |
| CN113643222A (en) * | 2020-04-23 | 2021-11-12 | 上海联影智能医疗科技有限公司 | Multi-modal image analysis method, computer device and storage medium |
| CN113744272A (en) * | 2021-11-08 | 2021-12-03 | 四川大学 | Automatic cerebral artery delineation method based on deep neural network |
| CN113902724A (en) * | 2021-10-18 | 2022-01-07 | 广州医科大学附属肿瘤医院 | Method, device, equipment and storage medium for classifying tumor cell images |
| CN114040707A (en) * | 2019-07-04 | 2022-02-11 | 帕拉梅维亚私人有限公司 | Diagnosis support program |
| CN114072845A (en) * | 2019-06-06 | 2022-02-18 | 医科达有限公司 | SCT image generation using cycleGAN with deformable layers |
| CN114266774A (en) * | 2022-03-03 | 2022-04-01 | 中日友好医院(中日友好临床医学研究所) | Method, equipment and system for diagnosing pulmonary embolism based on flat-scan CT image |
| CN114463246A (en) * | 2020-11-06 | 2022-05-10 | 广达电脑股份有限公司 | Circle selection system and circle selection method |
| CN114612478A (en) * | 2022-03-21 | 2022-06-10 | 华南理工大学 | Female pelvic cavity MRI automatic delineation system based on deep learning |
| JP2022530413A (en) * | 2019-09-26 | 2022-06-29 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Image processing methods and equipment, electronic devices, storage media and computer programs |
| CN114692985A (en) * | 2022-04-08 | 2022-07-01 | 上海柯林布瑞信息技术有限公司 | Diagnosis process optimization analysis method and device based on diagnosis and treatment nodes and electronic equipment |
| CN114973218A (en) * | 2021-02-24 | 2022-08-30 | 阿里巴巴集团控股有限公司 | Image processing method, device and system |
| CN115206146A (en) * | 2021-04-14 | 2022-10-18 | 北京医智影科技有限公司 | Intelligent teaching method, system, equipment and medium for delineating radiotherapy target area |
| CN115409739A (en) * | 2022-10-31 | 2022-11-29 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Method and system for automatically sketching organs at risk |
| WO2022247218A1 (en) * | 2021-05-27 | 2022-12-01 | 广州柏视医疗科技有限公司 | Image registration method based on automatic delineation |
| CN115568944A (en) * | 2022-11-21 | 2023-01-06 | 吉林省英华恒瑞生物科技有限公司 | Analog ablation method and system for tumor therapeutic apparatus |
| TWI790788B (en) * | 2020-10-23 | 2023-01-21 | 國立臺灣大學 | Medical image analyzing system and method thereof |
| CN116245831A (en) * | 2023-02-13 | 2023-06-09 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
| CN116258671A (en) * | 2022-12-26 | 2023-06-13 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | A method, system, device and storage medium for intelligent delineation based on MR images |
| CN116342859A (en) * | 2023-05-30 | 2023-06-27 | 安徽医科大学第一附属医院 | Method and system for identifying lung tumor area based on imaging features |
| CN116344001A (en) * | 2023-03-10 | 2023-06-27 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
| WO2023143625A1 (en) * | 2022-01-31 | 2023-08-03 | Conova Medical Technology Limited | Process and system for three-dimensional modelling of tissue of a subject, and surgical planning process and system |
| CN116862789A (en) * | 2023-06-29 | 2023-10-10 | 广州沙艾生物科技有限公司 | PET-MR image correction method |
| CN116934676A (en) * | 2022-04-22 | 2023-10-24 | 西门子医疗有限公司 | Presentation outcome learning for therapy response prediction of risk organ and total tumor volume |
| CN116993755A (en) * | 2023-05-20 | 2023-11-03 | 张瑞霞 | A multi-modal fusion medical image segmentation method |
| CN117152442A (en) * | 2023-10-27 | 2023-12-01 | 吉林大学 | Automatic image target area sketching method and device, electronic equipment and readable storage medium |
| CN117351489A (en) * | 2023-12-06 | 2024-01-05 | 四川省肿瘤医院 | Head and neck tumor target area delineating system for whole-body PET/CT scanning |
| CN117476219A (en) * | 2023-12-27 | 2024-01-30 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis |
| US11964170B2 (en) | 2019-08-29 | 2024-04-23 | Beijing Linking Medical Technology Co., Ltd. | Standardized artificial intelligence automatic radiation therapy planning method and system |
| CN117974631A (en) * | 2024-03-27 | 2024-05-03 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Method, system and medium for identifying lung tumor lesion characteristics based on CT image |
| CN118217551A (en) * | 2024-05-22 | 2024-06-21 | 四川省肿瘤医院 | Image-guided radiotherapy positioning system |
| CN118537699A (en) * | 2024-07-03 | 2024-08-23 | 青岛杰圣博生物科技有限公司 | Multi-mode oral cavity image data fusion and processing method |
| CN118781456A (en) * | 2024-06-11 | 2024-10-15 | 北京凯普顿医药科技开发有限公司 | A multimodal brain tumor image analysis system |
| CN119027504A (en) * | 2024-08-14 | 2024-11-26 | 中国人民解放军空军军医大学 | A tumor localization method and system for multimodal imaging |
| CN119028539A (en) * | 2024-01-16 | 2024-11-26 | 华硼中子科技(杭州)有限公司 | A segmented prompting method for intelligent delineation of target area in boron neutron capture therapy |
| WO2024242383A1 (en) * | 2023-05-25 | 2024-11-28 | 주식회사 딥바이오 | Tumor volume estimation method and computing system performing same |
| CN119337166A (en) * | 2024-12-20 | 2025-01-21 | 中南大学 | Three-branch diagnosis and evaluation method and device based on multimodal telemedicine data |
| CN119832033A (en) * | 2024-12-16 | 2025-04-15 | 同济大学 | Method for extracting and processing PET/CT (positron emission tomography/computed tomography) and radiotherapy dose images of lung tumor target area |
| CN119919723A (en) * | 2024-12-31 | 2025-05-02 | 河北大学附属医院 | A tumor image classification and recognition method, device and equipment |
| CN119991661A (en) * | 2025-04-14 | 2025-05-13 | 山东第二医科大学 | A method to improve the accuracy of tumor radiotherapy target delineation |
| WO2025128021A1 (en) * | 2023-12-12 | 2025-06-19 | Turkcell Teknoloji Arastirma Ve Gelistirme Anonim Sirketi | A system for diagnosing brain tumor via artificial intelligence |
| CN120771464A (en) * | 2025-09-10 | 2025-10-14 | 四川省肿瘤医院 | Space division radiotherapy guiding method and system based on target area immune region activation |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1801214A (en) * | 2005-11-18 | 2006-07-12 | 厦门大学 | Apparatus and method for processing tumor image information based on digital virtual organ |
| CN106920234A (en) * | 2017-02-27 | 2017-07-04 | 北京连心医疗科技有限公司 | A kind of method of the automatic radiotherapy planning of combined type |
- 2017-08-11 CN CN201710687331.0A patent/CN107403201A/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1801214A (en) * | 2005-11-18 | 2006-07-12 | 厦门大学 | Apparatus and method for processing tumor image information based on digital virtual organ |
| CN106920234A (en) * | 2017-02-27 | 2017-07-04 | 北京连心医疗科技有限公司 | A kind of method of the automatic radiotherapy planning of combined type |
Non-Patent Citations (2)
| Title |
|---|
| LIU Guocai et al.: "Random walk method for PET image segmentation of head and neck tumors", Journal of Hunan University (Natural Sciences) * |
| HU Zetian: "PET/MRI texture analysis and target area delineation for head and neck cancer", China Master's Theses Full-text Database, Medicine and Health Sciences * |
Cited By (229)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108170712A (en) * | 2017-11-29 | 2018-06-15 | 浙江大学 | A kind of method using the multi-media network study maximum boundary multi-media network expression comprising social geography information |
| CN108170712B (en) * | 2017-11-29 | 2021-08-10 | 浙江大学 | Method for learning maximum boundary multimedia network expression by using multimedia network containing social geographic information |
| CN107910061A (en) * | 2017-12-01 | 2018-04-13 | 中南大学 | A kind of medical data processing method and system |
| CN108257134A (en) * | 2017-12-21 | 2018-07-06 | 深圳大学 | Nasopharyngeal Carcinoma Lesions automatic division method and system based on deep learning |
| CN108257134B (en) * | 2017-12-21 | 2022-08-23 | 深圳大学 | Nasopharyngeal carcinoma focus automatic segmentation method and system based on deep learning |
| CN108229658A (en) * | 2017-12-27 | 2018-06-29 | 深圳先进技术研究院 | The implementation method and device of object detector based on finite sample |
| CN108229658B (en) * | 2017-12-27 | 2020-06-12 | 深圳先进技术研究院 | Method and device for realizing object detector based on limited samples |
| CN108109152A (en) * | 2018-01-03 | 2018-06-01 | 深圳北航新兴产业技术研究院 | Medical Images Classification and dividing method and device |
| CN108492309B (en) * | 2018-01-21 | 2022-03-04 | 西安电子科技大学 | Vein and Vessel Segmentation in Magnetic Resonance Image Based on Transfer Convolutional Neural Network |
| CN108492309A (en) * | 2018-01-21 | 2018-09-04 | 西安电子科技大学 | Magnetic resonance image medium sized vein blood vessel segmentation method based on migration convolutional neural networks |
| CN108198181A (en) * | 2018-01-23 | 2018-06-22 | 电子科技大学 | A kind of thermal-induced imagery processing method based on region segmentation and image co-registration |
| CN108288496A (en) * | 2018-01-26 | 2018-07-17 | 中国人民解放军总医院 | Tumor volume intelligence delineation method and device |
| CN108491856A (en) * | 2018-02-08 | 2018-09-04 | 西安电子科技大学 | A kind of image scene classification method based on Analysis On Multi-scale Features convolutional neural networks |
| CN108491856B (en) * | 2018-02-08 | 2022-02-18 | 西安电子科技大学 | Image scene classification method based on multi-scale feature convolutional neural network |
| CN108447551A (en) * | 2018-02-09 | 2018-08-24 | 北京连心医疗科技有限公司 | A kind of automatic delineation method in target area based on deep learning, equipment and storage medium |
| CN108537773A (en) * | 2018-02-11 | 2018-09-14 | 中国科学院苏州生物医学工程技术研究所 | Intelligence auxiliary mirror method for distinguishing is carried out for cancer of pancreas and pancreas inflammatory disease |
| CN108537773B (en) * | 2018-02-11 | 2022-06-17 | 中国科学院苏州生物医学工程技术研究所 | A method for intelligently assisted identification of pancreatic cancer and pancreatic inflammatory diseases |
| CN108492873A (en) * | 2018-03-13 | 2018-09-04 | 山东大学 | A kind of knowledge migration learning method for auxiliary diagnosis Alzheimer's disease |
| CN108492873B (en) * | 2018-03-13 | 2021-03-16 | 山东大学 | Knowledge transfer learning method for assisting in diagnosing Alzheimer's disease |
| CN108564567A (en) * | 2018-03-15 | 2018-09-21 | 中山大学 | A kind of ultrahigh resolution pathological image cancerous region method for visualizing |
| CN110322426A (en) * | 2018-03-28 | 2019-10-11 | 北京连心医疗科技有限公司 | Tumor target delineation method, equipment and storage medium based on variable manikin |
| CN110322426B (en) * | 2018-03-28 | 2022-05-10 | 北京连心医疗科技有限公司 | Method, device and storage medium for delineating tumor target area based on variable human body model |
| CN110390660A (en) * | 2018-04-16 | 2019-10-29 | 北京连心医疗科技有限公司 | A kind of medical image jeopardizes organ automatic classification method, equipment and storage medium |
| CN108629772B (en) * | 2018-05-08 | 2023-10-03 | 上海商汤智能科技有限公司 | Image processing method and device, computer equipment and computer storage medium |
| CN108629772A (en) * | 2018-05-08 | 2018-10-09 | 上海商汤智能科技有限公司 | Image processing method and device, computer equipment and computer storage media |
| CN108734210A (en) * | 2018-05-17 | 2018-11-02 | 浙江工业大学 | A kind of method for checking object based on cross-module state multi-scale feature fusion |
| CN108815721A (en) * | 2018-05-18 | 2018-11-16 | 山东省肿瘤防治研究院(山东省肿瘤医院) | A kind of exposure dose determines method and system |
| US20190350550A1 (en) * | 2018-05-18 | 2019-11-21 | Shandong Cancer Hospital and Institute | Method and system for determining irradiation dose |
| US11219426B2 (en) * | 2018-05-18 | 2022-01-11 | Jinming Yu | Method and system for determining irradiation dose |
| CN108898608A (en) * | 2018-05-28 | 2018-11-27 | 广东技术师范学院 | A kind of prostate ultrasonic image division method and equipment |
| CN108898557B (en) * | 2018-05-30 | 2021-08-06 | 商汤集团有限公司 | Image restoration method and apparatus, electronic device, computer program and storage medium |
| CN108898557A (en) * | 2018-05-30 | 2018-11-27 | 商汤集团有限公司 | Image restoration method and apparatus, electronic device, computer program, and storage medium |
| CN108648193A (en) * | 2018-06-06 | 2018-10-12 | 南方医科大学 | Biological tissue's image knows method for distinguishing and its system, computer storage media |
| CN108648193B (en) * | 2018-06-06 | 2023-10-31 | 南方医科大学 | Biological tissue image identification method and system and computer storage medium thereof |
| CN108765417A (en) * | 2018-06-15 | 2018-11-06 | 西安邮电大学 | It is a kind of that system and method is generated based on the femur X-ray film of deep learning and digital reconstruction irradiation image |
| CN108765417B (en) * | 2018-06-15 | 2021-11-05 | 西安邮电大学 | A system and method for femoral X-ray film generation based on deep learning and digital reconstruction of radiological images |
| CN109035160A (en) * | 2018-06-29 | 2018-12-18 | 哈尔滨商业大学 | The fusion method of medical image and the image detecting method learnt based on fusion medical image |
| CN109035160B (en) * | 2018-06-29 | 2022-06-21 | 哈尔滨商业大学 | Fusion method of medical image and image detection method based on fusion medical image learning |
| CN109146845A (en) * | 2018-07-16 | 2019-01-04 | 中南大学 | Head image sign point detecting method based on convolutional neural networks |
| CN109003269A (en) * | 2018-07-19 | 2018-12-14 | 哈尔滨工业大学 | A kind of mark extracting method for the medical image lesion that can improve doctor's efficiency |
| CN109003269B (en) * | 2018-07-19 | 2021-10-08 | 哈尔滨工业大学 | An annotation extraction method for medical imaging lesions that can improve doctor efficiency |
| CN110827961A (en) * | 2018-08-10 | 2020-02-21 | 北京连心医疗科技有限公司 | Automatic delineation method, device and storage medium for adaptive radiotherapy structure |
| CN110827961B (en) * | 2018-08-10 | 2022-08-23 | 北京连心医疗科技有限公司 | Automatic delineation method, device and storage medium for adaptive radiotherapy structure |
| CN109378068B (en) * | 2018-08-21 | 2022-02-18 | 深圳大学 | Automatic evaluation method and system for curative effect of nasopharyngeal carcinoma |
| CN109378068A (en) * | 2018-08-21 | 2019-02-22 | 深圳大学 | A method and system for automatic evaluation of curative effect of nasopharyngeal carcinoma |
| CN109146899A (en) * | 2018-08-28 | 2019-01-04 | 众安信息技术服务有限公司 | CT image jeopardizes organ segmentation method and device |
| CN109300136A (en) * | 2018-08-28 | 2019-02-01 | 众安信息技术服务有限公司 | An automatic segmentation method of organs at risk based on convolutional neural network |
| CN109300136B (en) * | 2018-08-28 | 2021-08-31 | 众安信息技术服务有限公司 | An automatic segmentation method of organs at risk based on convolutional neural network |
| CN110766693A (en) * | 2018-09-06 | 2020-02-07 | 北京连心医疗科技有限公司 | Method for jointly predicting radiotherapy structure position based on multi-model neural network |
| CN110766693B (en) * | 2018-09-06 | 2022-06-21 | 北京连心医疗科技有限公司 | Method for jointly predicting radiotherapy structure position based on multi-model neural network |
| CN111091524A (en) * | 2018-10-08 | 2020-05-01 | 天津工业大学 | A segmentation method of prostate transrectal ultrasound images based on deep convolutional neural network |
| CN109461161A (en) * | 2018-10-22 | 2019-03-12 | 北京连心医疗科技有限公司 | A method of human organ in medical image is split based on neural network |
| CN109523584A (en) * | 2018-10-26 | 2019-03-26 | 上海联影医疗科技有限公司 | Image processing method, device, multi-mode imaging system, storage medium and equipment |
| CN109523584B (en) * | 2018-10-26 | 2021-04-20 | 上海联影医疗科技股份有限公司 | Image processing method and device, multi-modality imaging system, storage medium and equipment |
| CN109615642B (en) * | 2018-11-05 | 2021-04-06 | 北京全域医疗技术集团有限公司 | Automatic organ-at-risk delineation method and device in radiotherapy plan |
| CN109615642A (en) * | 2018-11-05 | 2019-04-12 | 北京全域医疗技术有限公司 | Jeopardize the automatic delineation method of organ and device in a kind of radiotherapy planning |
| CN111161326A (en) * | 2018-11-08 | 2020-05-15 | 通用电气公司 | System and method for unsupervised deep learning for deformable image registration |
| US11880972B2 (en) | 2018-11-08 | 2024-01-23 | Tencent Technology (Shenzhen) Company Limited | Tissue nodule detection and tissue nodule detection model training method, apparatus, device, and system |
| CN109523526A (en) * | 2018-11-08 | 2019-03-26 | 腾讯科技(深圳)有限公司 | Organize nodule detection and its model training method, device, equipment and system |
| CN109523526B (en) * | 2018-11-08 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Tissue nodule detection and its model training method, device, equipment and system |
| CN111161326B (en) * | 2018-11-08 | 2023-09-08 | 通用电气公司 | Systems and methods for unsupervised deep learning for deformable image registration |
| US12346998B2 (en) | 2018-11-13 | 2025-07-01 | Koninklijke Philips N.V. | Artificial intelligence (AI)-based standardized uptake value (SUV) correction and variation assessment for positron emission tomography (PET) |
| CN113196340A (en) * | 2018-11-13 | 2021-07-30 | 皇家飞利浦有限公司 | Artificial Intelligence (AI) -based Standardized Uptake Value (SUV) correction and variance assessment for Positron Emission Tomography (PET) |
| WO2020099250A1 (en) * | 2018-11-13 | 2020-05-22 | Koninklijke Philips N.V. | Artificial intelligence (ai)-based standardized uptake value (suv) correction and variation assessment for positron emission tomography (pet) |
| CN113302627A (en) * | 2018-11-16 | 2021-08-24 | 波尔多大学 | Method for determining a stereotactic brain target |
| CN109559303A (en) * | 2018-11-22 | 2019-04-02 | 广州达美智能科技有限公司 | Recognition methods, device and the computer readable storage medium of calcification point |
| CN109330616A (en) * | 2018-12-08 | 2019-02-15 | 宁波耀通管阀科技有限公司 | Tumour hazard rating discrimination system |
| CN109378054A (en) * | 2018-12-13 | 2019-02-22 | 山西医科大学第医院 | A multi-modal image aided diagnosis system and its construction method |
| CN109754860A (en) * | 2018-12-21 | 2019-05-14 | 苏州雷泰医疗科技有限公司 | A Method of Automatically Predicting Planning Difficulty Based on DVH Prediction Model |
| CN109754860B (en) * | 2018-12-21 | 2020-11-10 | 苏州雷泰医疗科技有限公司 | Method for automatically predicting planning difficulty level based on DVH prediction model |
| CN109934796A (en) * | 2018-12-26 | 2019-06-25 | 苏州雷泰医疗科技有限公司 | A kind of automatic delineation method of organ based on Deep integrating study |
| CN109727235A (en) * | 2018-12-26 | 2019-05-07 | 苏州雷泰医疗科技有限公司 | A kind of automatic delineation algorithms of organ based on deep learning |
| CN109727235B (en) * | 2018-12-26 | 2021-05-11 | 苏州雷泰医疗科技有限公司 | Organ automatic delineation algorithm based on deep learning |
| CN111374690A (en) * | 2018-12-28 | 2020-07-07 | 通用电气公司 | Medical imaging method and system |
| CN109785306A (en) * | 2019-01-09 | 2019-05-21 | 上海联影医疗科技有限公司 | Organ delineation method, device, computer equipment and storage medium |
| CN109858428B (en) * | 2019-01-28 | 2021-08-17 | 四川大学 | Automatic identification method of ANA fluorescent sheet based on machine learning and deep learning |
| CN109858428A (en) * | 2019-01-28 | 2019-06-07 | 四川大学 | ANA flourescent sheet automatic identifying method based on machine learning and deep learning |
| CN109903272B (en) * | 2019-01-30 | 2021-09-03 | 西安天伟电子系统工程有限公司 | Target detection method, device, equipment, computer equipment and storage medium |
| CN109903272A (en) * | 2019-01-30 | 2019-06-18 | 西安天伟电子系统工程有限公司 | Object detection method, device, equipment, computer equipment and storage medium |
| CN111613300A (en) * | 2019-02-22 | 2020-09-01 | 未艾医疗技术(深圳)有限公司 | Tumor and blood vessel Ai processing methods and products based on VRDS 4D medical images |
| CN111613300B (en) * | 2019-02-22 | 2023-09-15 | 曹生 | Tumor and blood vessel Ai processing methods and products based on VRDS 4D medical imaging |
| CN109938764A (en) * | 2019-02-28 | 2019-06-28 | 佛山原子医疗设备有限公司 | A kind of adaptive multiple location scan imaging method and its system based on deep learning |
| CN109938764B (en) * | 2019-02-28 | 2021-05-18 | 佛山原子医疗设备有限公司 | Self-adaptive multi-part scanning imaging method and system based on deep learning |
| CN109948667A (en) * | 2019-03-01 | 2019-06-28 | 桂林电子科技大学 | Image classification method and device for prediction of distant metastasis of head and neck cancer |
| CN110047080A (en) * | 2019-03-12 | 2019-07-23 | 天津大学 | A method of the multi-modal brain tumor image fine segmentation based on V-Net |
| CN109949288A (en) * | 2019-03-15 | 2019-06-28 | 上海联影智能医疗科技有限公司 | Tumor type determines system, method and storage medium |
| CN109978852A (en) * | 2019-03-22 | 2019-07-05 | 邃蓝智能科技(上海)有限公司 | The radiotherapy image Target delineations method and system of microtissue organ based on deep learning |
| CN109978852B (en) * | 2019-03-22 | 2022-08-16 | 邃蓝智能科技(上海)有限公司 | Deep learning-based radiotherapy image target region delineation method and system for micro tissue organ |
| CN110070546A (en) * | 2019-04-18 | 2019-07-30 | 山东师范大学 | A kind of multiple target based on deep learning jeopardizes the automatic division method of organ, apparatus and system |
| CN110070546B (en) * | 2019-04-18 | 2021-08-27 | 山东师范大学 | Automatic multi-target organ-at-risk segmentation method, device and system based on deep learning |
| CN110060300A (en) * | 2019-04-28 | 2019-07-26 | 中国科学技术大学 | A kind of CT image relative position prediction technique and system |
| US20200364910A1 (en) * | 2019-05-13 | 2020-11-19 | Adobe Inc. | Line drawing generation |
| US10922860B2 (en) * | 2019-05-13 | 2021-02-16 | Adobe Inc. | Line drawing generation |
| CN110136840A (en) * | 2019-05-17 | 2019-08-16 | 山东管理学院 | A medical image classification method, device and computer-readable storage medium based on self-weighted hierarchical biological features |
| CN114072845A (en) * | 2019-06-06 | 2022-02-18 | 医科达有限公司 | SCT image generation using cycleGAN with deformable layers |
| CN110211139A (en) * | 2019-06-12 | 2019-09-06 | 安徽大学 | Automatic segmentation Radiotherapy of Esophageal Cancer target area and the method and system for jeopardizing organ |
| CN110232695A (en) * | 2019-06-18 | 2019-09-13 | 山东师范大学 | Left ventricle image partition method and system based on hybrid mode image |
| WO2020224123A1 (en) * | 2019-06-24 | 2020-11-12 | 浙江大学 | Deep learning-based seizure focus three-dimensional automatic positioning system |
| CN110390351A (en) * | 2019-06-24 | 2019-10-29 | 浙江大学 | A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning |
| US11645748B2 (en) | 2019-06-24 | 2023-05-09 | Zhejiang University | Three-dimensional automatic location system for epileptogenic focus based on deep learning |
| CN110264479A (en) * | 2019-06-25 | 2019-09-20 | 南京景三医疗科技有限公司 | Three-dimensional image segmentation method based on random walk and level set |
| CN110264479B (en) * | 2019-06-25 | 2023-03-24 | 南京景三医疗科技有限公司 | Three-dimensional image segmentation method based on random walk and level set |
| CN110415184A (en) * | 2019-06-28 | 2019-11-05 | 南开大学 | A Multimodal Image Enhancement Method Based on Orthogonal Metaspace |
| CN110415184B (en) * | 2019-06-28 | 2022-12-20 | 南开大学 | A Multimodal Image Enhancement Method Based on Orthogonal Metaspace |
| CN114040707A (en) * | 2019-07-04 | 2022-02-11 | 帕拉梅维亚私人有限公司 | Diagnosis support program |
| CN110378881A (en) * | 2019-07-05 | 2019-10-25 | 北京航空航天大学 | A kind of tumor-localizing system based on deep learning |
| CN110415239A (en) * | 2019-08-01 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment, medical treatment electronic equipment and medium |
| CN110415239B (en) * | 2019-08-01 | 2022-12-16 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, medical electronic device, and medium |
| CN112348040B (en) * | 2019-08-07 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Model training method, device and equipment |
| CN112348040A (en) * | 2019-08-07 | 2021-02-09 | 杭州海康威视数字技术股份有限公司 | Model training method, device and equipment |
| CN110473196A (en) * | 2019-08-14 | 2019-11-19 | 中南大学 | A kind of abdominal CT images target organ method for registering based on deep learning |
| CN110689521B (en) * | 2019-08-15 | 2022-07-29 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Automatic identification method and system for human body part to which medical image belongs |
| CN110610527B (en) * | 2019-08-15 | 2023-09-22 | 苏州瑞派宁科技有限公司 | SUV computing method, device, equipment, system and computer storage medium |
| CN110610527A (en) * | 2019-08-15 | 2019-12-24 | 苏州瑞派宁科技有限公司 | SUV calculation method, device, equipment, system and computer storage medium |
| CN110689521A (en) * | 2019-08-15 | 2020-01-14 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Automatic identification method and system for human body part to which medical image belongs |
| CN110415785A (en) * | 2019-08-29 | 2019-11-05 | 北京连心医疗科技有限公司 | The method and system of artificial intelligence guidance radiotherapy planning |
| US11964170B2 (en) | 2019-08-29 | 2024-04-23 | Beijing Linking Medical Technology Co., Ltd. | Standardized artificial intelligence automatic radiation therapy planning method and system |
| CN110706200A (en) * | 2019-09-02 | 2020-01-17 | 杭州深睿博联科技有限公司 | Data prediction method and device |
| CN110706200B (en) * | 2019-09-02 | 2022-08-05 | 杭州深睿博联科技有限公司 | Method and device for data prediction |
| CN110706749A (en) * | 2019-09-10 | 2020-01-17 | 至本医疗科技(上海)有限公司 | Cancer type prediction system and method based on tissue and organ differentiation hierarchical relation |
| CN110706749B (en) * | 2019-09-10 | 2022-06-10 | 至本医疗科技(上海)有限公司 | Cancer type prediction system and method based on tissue and organ differentiation hierarchical relation |
| JP2022530413A (en) * | 2019-09-26 | 2022-06-29 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Image processing methods and equipment, electronic devices, storage media and computer programs |
| CN110739049A (en) * | 2019-10-10 | 2020-01-31 | 上海联影智能医疗科技有限公司 | Image sketching method and device, storage medium and computer equipment |
| WO2021081759A1 (en) * | 2019-10-29 | 2021-05-06 | 中国科学院深圳先进技术研究院 | Collaborative imaging method and apparatus, storage medium, and collaborative imaging device |
| CN111047594B (en) * | 2019-11-06 | 2023-04-07 | 安徽医科大学 | Tumor MRI weak supervised learning analysis modeling method and model thereof |
| CN111047594A (en) * | 2019-11-06 | 2020-04-21 | 安徽医科大学 | Tumor MRI weak supervised learning analysis modeling method and model thereof |
| CN110930421A (en) * | 2019-11-22 | 2020-03-27 | 电子科技大学 | A Segmentation Method for CBCT Tooth Image |
| CN110930421B (en) * | 2019-11-22 | 2022-03-29 | 电子科技大学 | Segmentation method for CBCT (Cone Beam computed tomography) tooth image |
| CN110991458A (en) * | 2019-11-25 | 2020-04-10 | 创新奇智(北京)科技有限公司 | Artificial intelligence recognition result sampling system and sampling method based on image characteristics |
| CN110992406B (en) * | 2019-12-10 | 2024-04-30 | 张家港赛提菲克医疗器械有限公司 | Radiotherapy patient positioning rigid body registration algorithm based on region of interest |
| CN110992406A (en) * | 2019-12-10 | 2020-04-10 | 张家港赛提菲克医疗器械有限公司 | Radiotherapy patient positioning rigid body registration algorithm based on region of interest |
| CN111127444A (en) * | 2019-12-26 | 2020-05-08 | 广州柏视医疗科技有限公司 | Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network |
| CN111127444B (en) * | 2019-12-26 | 2021-06-04 | 广州柏视医疗科技有限公司 | A method for automatic identification of organs at risk in radiotherapy in CT images based on deep semantic network |
| CN111312373B (en) * | 2020-01-19 | 2023-08-18 | 浙江树人学院(浙江树人大学) | Automatic labeling method for PET/CT image fusion |
| CN111312373A (en) * | 2020-01-19 | 2020-06-19 | 浙江树人学院(浙江树人大学) | An automatic labeling method for PET/CT image fusion |
| CN111340135B (en) * | 2020-03-12 | 2021-07-23 | 甄鑫 | Classification of small renal masses based on random projection |
| CN111340135A (en) * | 2020-03-12 | 2020-06-26 | 广州领拓医疗科技有限公司 | Classification of small renal masses based on random projection |
| CN111420271A (en) * | 2020-04-02 | 2020-07-17 | 河北普尼医疗科技有限公司 | Electrode patch positioning method based on head tumor treatment |
| CN111477298A (en) * | 2020-04-03 | 2020-07-31 | 北京易康医疗科技有限公司 | A tracking method for tumor location changes during radiotherapy |
| CN111477298B (en) * | 2020-04-03 | 2021-06-15 | 山东省肿瘤防治研究院(山东省肿瘤医院) | Method for tracking tumor position change in radiotherapy process |
| CN113643222A (en) * | 2020-04-23 | 2021-11-12 | 上海联影智能医疗科技有限公司 | Multi-modal image analysis method, computer device and storage medium |
| CN112330674A (en) * | 2020-05-07 | 2021-02-05 | 南京信息工程大学 | Self-adaptive variable-scale convolution kernel method based on brain MRI (magnetic resonance imaging) three-dimensional image confidence |
| CN112330674B (en) * | 2020-05-07 | 2023-06-30 | 南京信息工程大学 | An Adaptive Variable-Scale Convolution Kernel Method Based on the Confidence of 3D Brain MRI Images |
| CN111599445A (en) * | 2020-05-14 | 2020-08-28 | 安徽慧软科技有限公司 | Fully automatic CT image processing system for automatic delineation |
| CN111598883A (en) * | 2020-05-20 | 2020-08-28 | 重庆工程职业技术学院 | Calibration label equipment for acquiring cloud data medical image and working method |
| CN111598883B (en) * | 2020-05-20 | 2023-05-26 | 重庆工程职业技术学院 | Calibration and labeling equipment and working method for obtaining cloud data medical images |
| CN111680758B (en) * | 2020-06-15 | 2024-03-05 | 杭州海康威视数字技术股份有限公司 | Image training sample generation method and device |
| CN111680758A (en) * | 2020-06-15 | 2020-09-18 | 杭州海康威视数字技术股份有限公司 | Image training sample generation method and device |
| CN112288683A (en) * | 2020-06-30 | 2021-01-29 | 深圳市智影医疗科技有限公司 | Apparatus and method for judging pulmonary tuberculosis based on multimodal fusion |
| CN111899850A (en) * | 2020-08-12 | 2020-11-06 | 上海依智医疗技术有限公司 | Medical image information processing method, display method and readable storage medium |
| CN112258457A (en) * | 2020-09-28 | 2021-01-22 | 汕头大学 | Multi-dimensional feature extraction method for full-volume three-dimensional ultrasonic image |
| CN112258457B (en) * | 2020-09-28 | 2023-09-05 | 汕头大学 | A multidimensional feature extraction method for full-volume 3D ultrasound images |
| CN112270660A (en) * | 2020-09-30 | 2021-01-26 | 四川大学 | Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network |
| TWI790788B (en) * | 2020-10-23 | 2023-01-21 | 國立臺灣大學 | Medical image analyzing system and method thereof |
| US12014813B2 (en) | 2020-11-06 | 2024-06-18 | Quanta Computer Inc. | Contouring system |
| CN114463246A (en) * | 2020-11-06 | 2022-05-10 | 广达电脑股份有限公司 | Contouring system and contouring method |
| CN112541941B (en) * | 2020-12-07 | 2023-12-15 | 明峰医疗系统股份有限公司 | Scanning flow decision method and system based on CT (computed tomography) positioning sheet |
| CN112541941A (en) * | 2020-12-07 | 2021-03-23 | 明峰医疗系统股份有限公司 | Scanning flow decision method and system based on CT locating sheet |
| CN112734790B (en) * | 2020-12-30 | 2023-07-11 | 武汉联影生命科学仪器有限公司 | A method, system, device, and readable storage medium for tumor region labeling |
| CN112734790A (en) * | 2020-12-30 | 2021-04-30 | 武汉联影生命科学仪器有限公司 | Tumor region labeling method, system, device and readable storage medium |
| CN114973218A (en) * | 2021-02-24 | 2022-08-30 | 阿里巴巴集团控股有限公司 | Image processing method, device and system |
| US12198379B2 (en) | 2021-03-16 | 2025-01-14 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for image segmentation |
| CN112802036A (en) * | 2021-03-16 | 2021-05-14 | 上海联影医疗科技股份有限公司 | Method, system and device for segmenting target area of three-dimensional medical image |
| CN113077433A (en) * | 2021-03-30 | 2021-07-06 | 山东英信计算机技术有限公司 | Deep learning-based tumor target area cloud detection device, system, method and medium |
| CN113077433B (en) * | 2021-03-30 | 2023-04-07 | 山东英信计算机技术有限公司 | Deep learning-based tumor target area cloud detection device, system, method and medium |
| CN112862822B (en) * | 2021-04-06 | 2023-05-30 | 华侨大学 | Method, device and medium for ultrasonic breast tumor detection and classification |
| CN112862822A (en) * | 2021-04-06 | 2021-05-28 | 华侨大学 | Ultrasonic breast tumor detection and classification method, device and medium |
| CN115206146B (en) * | 2021-04-14 | 2023-09-22 | 北京医智影科技有限公司 | Intelligent teaching method, system, equipment and medium for drawing radiotherapy target area |
| CN115206146A (en) * | 2021-04-14 | 2022-10-18 | 北京医智影科技有限公司 | Intelligent teaching method, system, equipment and medium for delineating radiotherapy target area |
| CN113269734A (en) * | 2021-05-14 | 2021-08-17 | 成都市第三人民医院 | Tumor image detection method and device based on meta-learning feature fusion strategy |
| CN113192053A (en) * | 2021-05-18 | 2021-07-30 | 北京大学第三医院(北京大学第三临床医学院) | Cervical tumor target area intelligent delineation method, equipment and medium based on deep learning |
| CN113222038B (en) * | 2021-05-24 | 2021-10-22 | 北京安德医智科技有限公司 | Breast lesion classification and positioning method and device based on nuclear magnetic image |
| CN113222038A (en) * | 2021-05-24 | 2021-08-06 | 北京安德医智科技有限公司 | Breast lesion classification and positioning method and device based on nuclear magnetic image |
| WO2022247218A1 (en) * | 2021-05-27 | 2022-12-01 | 广州柏视医疗科技有限公司 | Image registration method based on automatic delineation |
| CN113487579A (en) * | 2021-07-14 | 2021-10-08 | 广州柏视医疗科技有限公司 | Multi-mode migration method for automatically sketching model |
| CN113539402A (en) * | 2021-07-14 | 2021-10-22 | 广州柏视医疗科技有限公司 | Multi-mode image automatic sketching model migration method |
| CN113487579B (en) * | 2021-07-14 | 2022-04-01 | 广州柏视医疗科技有限公司 | Multi-mode migration method for automatically sketching model |
| CN113539402B (en) * | 2021-07-14 | 2022-04-01 | 广州柏视医疗科技有限公司 | Model transfer method for automatic delineation of multimodal images |
| CN113628325B (en) * | 2021-08-10 | 2024-03-26 | 海盐县南北湖医学人工智能研究院 | Model building method for small organ tumor evolution and computer readable storage medium |
| CN113628325A (en) * | 2021-08-10 | 2021-11-09 | 海盐县南北湖医学人工智能研究院 | Small organ tumor evolution model establishing method and computer readable storage medium |
| CN113643255B (en) * | 2021-08-11 | 2024-12-31 | 锐视医疗科技(苏州)有限公司 | Method and system for delineating organs at risk based on deep learning |
| CN113643255A (en) * | 2021-08-11 | 2021-11-12 | 锐视智慧(北京)医疗科技有限公司 | Method and system for sketching organs at risk based on deep learning |
| CN113902724A (en) * | 2021-10-18 | 2022-01-07 | 广州医科大学附属肿瘤医院 | Method, device, equipment and storage medium for classifying tumor cell images |
| CN113902724B (en) * | 2021-10-18 | 2022-07-01 | 广州医科大学附属肿瘤医院 | Tumor cell image classification method and device, equipment and storage medium |
| CN113744272A (en) * | 2021-11-08 | 2021-12-03 | 四川大学 | Automatic cerebral artery delineation method based on deep neural network |
| CN113744272B (en) * | 2021-11-08 | 2022-01-28 | 四川大学 | A method for automatic delineation of cerebral arteries based on deep neural network |
| WO2023143625A1 (en) * | 2022-01-31 | 2023-08-03 | Conova Medical Technology Limited | Process and system for three-dimensional modelling of tissue of a subject, and surgical planning process and system |
| CN114266774A (en) * | 2022-03-03 | 2022-04-01 | 中日友好医院(中日友好临床医学研究所) | Method, equipment and system for diagnosing pulmonary embolism based on flat-scan CT image |
| CN114612478B (en) * | 2022-03-21 | 2024-05-10 | 华南理工大学 | Female pelvic cavity MRI automatic sketching system based on deep learning |
| CN114612478A (en) * | 2022-03-21 | 2022-06-10 | 华南理工大学 | Female pelvic cavity MRI automatic delineation system based on deep learning |
| CN114692985A (en) * | 2022-04-08 | 2022-07-01 | 上海柯林布瑞信息技术有限公司 | Diagnosis process optimization analysis method and device based on diagnosis and treatment nodes and electronic equipment |
| CN116934676A (en) * | 2022-04-22 | 2023-10-24 | 西门子医疗有限公司 | Presentation outcome learning for therapy response prediction of risk organ and total tumor volume |
| CN115409739A (en) * | 2022-10-31 | 2022-11-29 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Method and system for automatically sketching organs at risk |
| CN115568944A (en) * | 2022-11-21 | 2023-01-06 | 吉林省英华恒瑞生物科技有限公司 | Simulated ablation method and system for tumor therapeutic apparatus |
| CN115568944B (en) * | 2022-11-21 | 2023-02-24 | 吉林省英华恒瑞生物科技有限公司 | Simulated ablation method and system for tumor therapeutic apparatus |
| CN116258671B (en) * | 2022-12-26 | 2023-08-29 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | A method, system, device and storage medium for intelligent delineation based on MR images |
| CN116258671A (en) * | 2022-12-26 | 2023-06-13 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | A method, system, device and storage medium for intelligent delineation based on MR images |
| CN116245831B (en) * | 2023-02-13 | 2024-01-16 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
| CN116245831A (en) * | 2023-02-13 | 2023-06-09 | 天津市鹰泰利安康医疗科技有限责任公司 | Tumor treatment auxiliary method and system based on bimodal imaging |
| CN116344001A (en) * | 2023-03-10 | 2023-06-27 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
| CN116344001B (en) * | 2023-03-10 | 2023-10-24 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
| CN116993755A (en) * | 2023-05-20 | 2023-11-03 | 张瑞霞 | A multi-modal fusion medical image segmentation method |
| WO2024242383A1 (en) * | 2023-05-25 | 2024-11-28 | 주식회사 딥바이오 | Tumor volume estimation method and computing system performing same |
| CN116342859A (en) * | 2023-05-30 | 2023-06-27 | 安徽医科大学第一附属医院 | Method and system for identifying lung tumor area based on imaging features |
| CN116342859B (en) * | 2023-05-30 | 2023-08-18 | 安徽医科大学第一附属医院 | A method and system for identifying lung tumor regions based on imaging features |
| CN116862789B (en) * | 2023-06-29 | 2024-04-23 | 广州沙艾生物科技有限公司 | PET-MR image correction method |
| CN116862789A (en) * | 2023-06-29 | 2023-10-10 | 广州沙艾生物科技有限公司 | PET-MR image correction method |
| CN117152442A (en) * | 2023-10-27 | 2023-12-01 | 吉林大学 | Automatic image target area delineation method and device, electronic equipment and readable storage medium |
| CN117152442B (en) * | 2023-10-27 | 2024-02-02 | 吉林大学 | Automatic image target area delineation method and device, electronic equipment and readable storage medium |
| CN117351489B (en) * | 2023-12-06 | 2024-03-08 | 四川省肿瘤医院 | Head and neck tumor target area delineating system for whole-body PET/CT scanning |
| CN117351489A (en) * | 2023-12-06 | 2024-01-05 | 四川省肿瘤医院 | Head and neck tumor target area delineating system for whole-body PET/CT scanning |
| WO2025128021A1 (en) * | 2023-12-12 | 2025-06-19 | Turkcell Teknoloji Arastirma Ve Gelistirme Anonim Sirketi | A system for diagnosing brain tumor via artificial intelligence |
| CN117476219B (en) * | 2023-12-27 | 2024-03-12 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for locating CT tomographic images based on big data analysis |
| CN117476219A (en) * | 2023-12-27 | 2024-01-30 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis |
| CN119028539A (en) * | 2024-01-16 | 2024-11-26 | 华硼中子科技(杭州)有限公司 | A segmented prompting method for intelligent delineation of target area in boron neutron capture therapy |
| CN117974631A (en) * | 2024-03-27 | 2024-05-03 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Method, system and medium for identifying lung tumor lesion characteristics based on CT image |
| CN118217551A (en) * | 2024-05-22 | 2024-06-21 | 四川省肿瘤医院 | Image-guided radiotherapy positioning system |
| CN118781456A (en) * | 2024-06-11 | 2024-10-15 | 北京凯普顿医药科技开发有限公司 | A multimodal brain tumor image analysis system |
| CN118537699B (en) * | 2024-07-03 | 2024-10-29 | 青岛杰圣博生物科技有限公司 | A method for fusion and processing of multimodal oral image data |
| CN118537699A (en) * | 2024-07-03 | 2024-08-23 | 青岛杰圣博生物科技有限公司 | Multi-mode oral cavity image data fusion and processing method |
| CN119027504A (en) * | 2024-08-14 | 2024-11-26 | 中国人民解放军空军军医大学 | A tumor localization method and system for multimodal imaging |
| CN119832033A (en) * | 2024-12-16 | 2025-04-15 | 同济大学 | Method for extracting and processing PET/CT (positron emission tomography/computed tomography) and radiotherapy dose images of lung tumor target area |
| CN119832033B (en) * | 2024-12-16 | 2025-10-10 | 同济大学 | A lung tumor target area PET/CT, radiotherapy dose image extraction and processing method |
| CN119337166A (en) * | 2024-12-20 | 2025-01-21 | 中南大学 | Three-branch diagnosis and evaluation method and device based on multimodal telemedicine data |
| CN119337166B (en) * | 2024-12-20 | 2025-03-18 | 中南大学 | Three-branch diagnosis and evaluation method and device based on multi-mode remote medical data |
| CN119919723A (en) * | 2024-12-31 | 2025-05-02 | 河北大学附属医院 | A tumor image classification and recognition method, device and equipment |
| CN119991661A (en) * | 2025-04-14 | 2025-05-13 | 山东第二医科大学 | A method to improve the accuracy of tumor radiotherapy target delineation |
| CN119991661B (en) * | 2025-04-14 | 2025-06-10 | 山东第二医科大学 | A method to improve the accuracy of tumor radiotherapy target delineation |
| CN120771464A (en) * | 2025-09-10 | 2025-10-14 | 四川省肿瘤医院 | Space division radiotherapy guiding method and system based on target area immune region activation |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107403201A (en) | Intelligent, automated delineation method for tumour radiotherapy target areas and organs at risk | |
| Schreier et al. | Clinical evaluation of a full-image deep segmentation algorithm for the male pelvis on cone-beam CT and CT | |
| US7876938B2 (en) | System and method for whole body landmark detection, segmentation and change quantification in digital images | |
| Li et al. | Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation | |
| Jiang et al. | Medical image analysis with artificial neural networks | |
| Carvalho et al. | 3D segmentation algorithms for computerized tomographic imaging: a systematic literature review | |
| Göçeri et al. | A comparative performance evaluation of various approaches for liver segmentation from SPIR images | |
| Göçeri | Fully automated liver segmentation using Sobolev gradient‐based level set evolution | |
| Luo et al. | An optimized two-stage cascaded deep neural network for adrenal segmentation on CT images | |
| Campadelli et al. | A segmentation framework for abdominal organs from CT scans | |
| Sun et al. | Intracranial hemorrhage detection by 3D voxel segmentation on brain CT images | |
| Zhu et al. | Automatic delineation of the myocardial wall from CT images via shape segmentation and variational region growing | |
| CN106846330A (en) | Human liver's feature modeling and vascular pattern space normalizing method | |
| Liu et al. | Automatic segmentation algorithm of ultrasound heart image based on convolutional neural network and image saliency | |
| Tummala et al. | Liver tumor segmentation from computed tomography images using multiscale residual dilated encoder‐decoder network | |
| Lee et al. | Multi-contrast computed tomography healthy kidney atlas | |
| Yin et al. | Automatic breast tissue segmentation in MRIs with morphology snake and deep denoiser training via extended Stein’s unbiased risk estimator | |
| Bai et al. | Automatic whole heart segmentation based on watershed and active contour model in CT images | |
| Wieclawek | 3D marker-controlled watershed for kidney segmentation in clinical CT exams | |
| Zhu et al. | A complete system for automatic extraction of left ventricular myocardium from CT images using shape segmentation and contour evolution | |
| Sweetlin et al. | Patient–Specific Model Based Segmentation of Lung Computed Tomography Images | |
| Rastgarpour et al. | The status quo of artificial intelligence methods in automatic medical image segmentation | |
| Tan et al. | A segmentation method of lung parenchyma from chest CT images based on dual U-Net | |
| Zhao | Novel image processing and deep learning methods for head and neck cancer delineation from MRI data | |
| Balaji | Generative deep belief model for improved medical image segmentation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20171128 |