CN109636813A - Segmentation method and system for prostate magnetic resonance images - Google Patents
- Publication number: CN109636813A
- Application number: CN201811538977.3A
- Authority
- CN
- China
- Prior art keywords
- label
- image
- magnetic resonance
- resonance image
- convolutional network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/11—Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
- G06T2207/10088—Magnetic resonance imaging [MRI] (G06T2207/10 Image acquisition modality)
- G06T2207/20081—Training; Learning (G06T2207/20 Special algorithmic details)
- G06T2207/20084—Artificial neural networks [ANN] (G06T2207/20 Special algorithmic details)
- G06T2207/30081—Prostate (G06T2207/30 Subject of image; Context of image processing)
Abstract
The invention discloses a segmentation method and system for prostate magnetic resonance images, relating to the field of medical image processing. The method comprises the following steps: in the training stage, an image is input into a fully convolutional network to obtain the corresponding output probabilities, and the cross entropy between the output probabilities and the label is computed; a weight map is computed from the image and the label, the cross entropy is multiplied pixel-by-pixel with the weight map to obtain the final loss, and the parameters of the fully convolutional network are adjusted until the loss reaches a minimum. In the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained fully convolutional network to obtain an initial segmentation result. The invention automatically segments the central gland and peripheral zone of the prostate from magnetic resonance images.
Description
Technical field
The present invention relates to the field of medical image processing, and in particular to a segmentation method and system for prostate magnetic resonance images.
Background art
Prostatic disorders are very common in older men. In particular, prostate cancer has become the second most common cancer threatening men's health. In the United States, about one in six men will develop prostate cancer, and about one in thirty-six will die of the disease. Among the many examination methods, MRI (Magnetic Resonance Imaging) has become the most effective means of detecting prostate cancer.
Anatomically, the prostate can be divided into the central gland (CG) and the peripheral zone (PZ). About 70%-75% of prostate cancers arise in the PZ, and cancers arising in the PZ differ in appearance from those arising in the CG. As an important step in treatment planning, accurate segmentation of the prostate from MR images is therefore essential to the diagnosis of prostate cancer.
At present, prostate segmentation is performed manually by physicians; the quality of the segmentation depends largely on the physician's experience, and manual segmentation is time-consuming and subjective. A fast segmentation method for the prostate is therefore urgently needed in clinical practice.
However, automatic segmentation of the prostate from magnetic resonance (MR) images is very difficult, mainly for the following reasons:
First, the prostate is similar to the surrounding tissue and lacks a clear boundary;
Second, different subjects, different diseases, and different imaging conditions cause large variations in the shape and size of the prostate.
Many prostate segmentation methods have been proposed, but their results still differ considerably from manual segmentation. Moreover, most existing methods target the whole prostate and do not segment the central gland and peripheral zone separately.
Automatic segmentation of the central gland and peripheral zone within the prostate can be regarded as a semantic segmentation problem in medical images, that is, assigning a class label to each pixel in the image. Fully convolutional networks (FCNs) have proven to be an effective tool for semantic segmentation and can segment all targets in an image simultaneously.
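As an illustration of per-pixel labeling, the minimal sketch below (hypothetical names and toy values, not the network of the invention) turns a map of per-pixel class scores, such as an FCN would produce, into a label map by taking the highest-scoring class at each pixel:

```python
import numpy as np

def pixelwise_labels(score_map):
    """Turn a per-pixel class score map (H, W, C) into a label map (H, W).

    Each pixel receives the class with the highest score; for the prostate
    task, C = 3 (background, peripheral zone, central gland).
    """
    return np.argmax(score_map, axis=-1)

# Toy 2x2 score map with 3 classes, values invented for illustration.
scores = np.array([[[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
                   [[0.2, 0.1, 0.7],  [0.3, 0.4, 0.3]]])
labels = pixelwise_labels(scores)  # each entry is 0, 1, or 2
```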
In the course of implementing the present invention, the inventors found at least the following problem in the prior art: the results obtained by a fully convolutional network for medical image segmentation are not accurate enough, some details are segmented poorly, and the performance still needs further improvement.
Summary of the invention
The purpose of the invention is to overcome the shortcomings of the above background art by providing a segmentation method and system for prostate magnetic resonance images that automatically segment the central gland and peripheral zone of the prostate from magnetic resonance images.
In a first aspect, a segmentation method for prostate magnetic resonance images is provided, comprising the following steps:
In the training stage, an image is input into a fully convolutional network to obtain the corresponding output probabilities, and the cross entropy between the output probabilities and the label is computed; a weight map is computed from the image and the label, the cross entropy is multiplied pixel-by-pixel with the weight map to obtain the final loss, and the parameters of the fully convolutional network are adjusted until the loss reaches a minimum;
In the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained fully convolutional network to obtain an initial segmentation result.
The above technical solution automatically segments each region of the prostate from magnetic resonance images, i.e., automatic segmentation of the central gland and peripheral zone within the prostate tissue.
According to the first aspect, in a first possible implementation of the first aspect, the weight map is computed as:

w_i(x) = a_i · Morphology(y_x) / Grad(I_x) + b_i

where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, Grad(I_x) is the gradient of the original image (the increased part of the weight is inversely proportional to this gradient), Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, a_i is a coefficient controlling how much the weight is increased, b_i is the base weight in the final loss function for pixels whose weight is not increased, and i = 0, 1, or 2 corresponds respectively to the background, peripheral zone, and central gland in the label.
This technical solution designs a new weight map calculation for prostate MR images, used to weight the loss function: pixels that are difficult to segment in prostate MR images are assigned higher weights. The weight map contains three components, corresponding respectively to the background, peripheral zone, and central gland, which helps the deep learning model better segment each region of the prostate.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, Morphology(y_x) is obtained by subtracting the eroded label map from the dilated label map: Morphology(y_x) = Dilation(y_x, sm_i) − Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is the morphological structuring element used to control the extent of the dilation and erosion operations.
According to the first aspect, in a third possible implementation of the first aspect, after the initial segmentation result is obtained, the method further comprises the following step:
Performing manual adjustment on the basis of the initial segmentation result to obtain the final segmentation result.
That is, the automatically segmented result can be further adjusted manually by a physician.
According to the first aspect, in a fourth possible implementation of the first aspect, the parameters of the fully convolutional network refer to the weights of the neurons in the fully convolutional network model.
In a second aspect, a segmentation system for prostate magnetic resonance images is provided, comprising:
a training unit, configured to: in the training stage, input an image into a fully convolutional network to obtain the corresponding output probabilities, compute the cross entropy between the output probabilities and the label, compute a weight map from the image and the label, multiply the cross entropy pixel-by-pixel with the weight map to obtain the final loss, and adjust the parameters of the fully convolutional network until the loss reaches a minimum;
a segmentation unit, configured to: in the segmentation stage, input the prostate magnetic resonance image to be segmented into the trained fully convolutional network to obtain an initial segmentation result.
The above technical solution automatically segments each region of the prostate from magnetic resonance images, i.e., automatic segmentation of the central gland and peripheral zone within the prostate tissue.
According to the second aspect, in a first possible implementation of the second aspect, the weight map is computed as:

w_i(x) = a_i · Morphology(y_x) / Grad(I_x) + b_i

where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, Grad(I_x) is the gradient of the original image (the increased part of the weight is inversely proportional to this gradient), Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, a_i is a coefficient controlling how much the weight is increased, b_i is the base weight in the final loss function for pixels whose weight is not increased, and i = 0, 1, or 2 corresponds respectively to the background, peripheral zone, and central gland in the label.
This technical solution designs a new weight map calculation for prostate MR images, used to weight the loss function: pixels that are difficult to segment in prostate MR images are assigned higher weights. The weight map contains three components, corresponding respectively to the background, peripheral zone, and central gland, which helps the deep learning model better segment each region of the prostate.
According to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, Morphology(y_x) is obtained by subtracting the eroded label map from the dilated label map: Morphology(y_x) = Dilation(y_x, sm_i) − Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is the morphological structuring element used to control the extent of the dilation and erosion operations.
According to the second aspect, in a third possible implementation of the second aspect, the system further comprises:
a manual adjustment unit, configured to perform manual adjustment on the basis of the initial segmentation result to obtain the final segmentation result.
That is, the automatically segmented result can be further adjusted manually by a physician.
According to the second aspect, in a fourth possible implementation of the second aspect, the parameters of the fully convolutional network refer to the weights of the neurons in the fully convolutional network model.
Compared with the prior art, the advantages of the present invention are as follows:
In the training stage, the present invention inputs an image into a fully convolutional network to obtain the corresponding output probabilities and computes the cross entropy between the output probabilities and the label; meanwhile, a weight map is computed from the image and the label, the cross entropy is multiplied pixel-by-pixel with the weight map to obtain the final loss, and the parameters of the fully convolutional network (the weights of the neurons in the model) are adjusted continually until the loss reaches a minimum. In the segmentation stage, the prostate magnetic resonance image to be segmented is input into the trained fully convolutional network to obtain an initial segmentation result. The present invention automatically segments each region of the prostate from magnetic resonance images, i.e., the central gland and peripheral zone within the prostate tissue, and the automatically segmented result can be further adjusted manually by a physician.
Description of the drawings
Fig. 1 is a flow chart of the computation of the loss function used for training the model in an embodiment of the present invention.
Fig. 2 is a flow chart of the segmentation method for prostate magnetic resonance images without the manual adjustment step in an embodiment of the present invention.
Fig. 3 is a flow chart of the segmentation method for prostate magnetic resonance images including the manual adjustment step in an embodiment of the present invention.
Specific embodiments
Reference will now be made in detail to specific embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these specific embodiments, it will be understood that they are not intended to limit the invention to the embodiments described; on the contrary, the invention is intended to cover the alternatives, modifications, and equivalents included within its spirit and scope as defined by the appended claims. It should be noted that the method steps described herein can be implemented by any functional block or functional arrangement, and any functional block or functional arrangement can be implemented as a physical entity, a logical entity, or a combination of the two.
In order to enable those skilled in the art to better understand the present invention, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Note: the example introduced next is only one specific instance; the embodiments of the invention are not limited to the specific steps, values, conditions, data, order, etc. that follow. By reading this specification, those skilled in the art can use the concepts of the invention to construct further embodiments not mentioned herein.
An embodiment of the present invention provides a segmentation method for prostate magnetic resonance images, comprising the following steps:
In the training stage, as shown in Fig. 1, an image is input into a fully convolutional network to obtain the corresponding output probabilities, and the cross entropy between the output probabilities and the label is computed; meanwhile, a weight map is computed from the image and the label, the cross entropy is multiplied pixel-by-pixel with the weight map to obtain the final loss, and the parameters of the fully convolutional network are adjusted continually until the loss reaches a minimum. Here the parameters of the fully convolutional network refer to the weights of the neurons in the model.
In the segmentation stage, as shown in Fig. 2, the prostate magnetic resonance image to be segmented is input into the trained fully convolutional network to obtain an initial segmentation result.
The above technical solution automatically segments each region of the prostate from magnetic resonance images, i.e., automatic segmentation of the central gland and peripheral zone within the prostate tissue.
As an optional embodiment, as shown in Fig. 3, manual adjustment can further be performed by a physician on the basis of the initial segmentation result to obtain the final segmentation result.
An embodiment of the present invention also provides a segmentation system for prostate magnetic resonance images, comprising:
a training unit, configured to: in the training stage, input an image into a fully convolutional network to obtain the corresponding output probabilities, compute the cross entropy between the output probabilities and the label, compute a weight map from the image and the label, multiply the cross entropy pixel-by-pixel with the weight map to obtain the final loss, and adjust the parameters of the fully convolutional network (the weights of the neurons in the model) until the loss reaches a minimum;
a segmentation unit, configured to: in the segmentation stage, input the prostate magnetic resonance image to be segmented into the trained fully convolutional network to obtain an initial segmentation result.
The above technical solution automatically segments each region of the prostate from magnetic resonance images, i.e., automatic segmentation of the central gland and peripheral zone within the prostate tissue.
As an optional embodiment, the system further comprises:
a manual adjustment unit, configured to perform manual adjustment on the basis of the initial segmentation result to obtain the final segmentation result.
Prostate MR image segmentation can be regarded as a semantic segmentation problem, i.e., assigning a class label to each pixel in the image. Fully convolutional networks (FCNs) have proven to be an effective tool for semantic segmentation and can segment all targets in an image simultaneously. As one of the basic problems of pattern recognition, semantic segmentation interprets new data using knowledge learned from known data and labels. The process is divided into two stages: first, a model is trained using existing data and labels; second, the trained model is used to infer the labels of new data, i.e., a class label is assigned to each pixel of a new image, thereby performing semantic segmentation. The training of the model can be described as follows: given a data set, a parameterized model is trained so that the corresponding loss function reaches a minimum, expressed mathematically as:

min { ∑_(x,y)∈D L(f_θ(x), y) }    (1)

where θ denotes the parameters of the deep network, y denotes the label, ∑_(x,y)∈D denotes summation of the loss over all training samples, L(f_θ(x), y) is the loss function penalizing erroneous labels, and D is the training set.
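The training objective of formula (1) can be illustrated with a toy example: a linear model and invented data stand in for the fully convolutional network, and plain gradient descent adjusts the parameters θ until the summed loss over the training set D stops decreasing. All names and values here are illustrative assumptions, not part of the invention:

```python
import numpy as np

# Toy stand-in for formula (1): a linear model f_theta(x) = x . theta
# replaces the fully convolutional network, and gradient descent plays
# the role of the training procedure that adjusts the parameters until
# the summed loss over the training set D reaches a minimum.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])  # inputs
y = np.array([2.0, -1.0, 1.0, 3.0])                             # labels

def summed_loss(theta):
    """Sum over D of L(f_theta(x), y), with a squared-error L."""
    return float(np.sum((X @ theta - y) ** 2))

theta = np.zeros(2)                # initial parameters
initial_loss = summed_loss(theta)
for _ in range(300):               # "continually adjust the parameters"
    grad = 2.0 * X.T @ (X @ theta - y)  # gradient of the summed loss
    theta -= 0.05 * grad                # gradient-descent update
final_loss = summed_loss(theta)
```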
The weighted loss function is expressed mathematically as:

Loss = (1/n) · ∑_x w(x) · CE(f_θ(x), y_x)    (2)

where w(x) is the weight map, CE(f_θ(x), y_x) is the cross entropy between the output probability f_θ(x) of the model and the label y_x, and n denotes the total number of pixels.
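The weighted loss function described above can be sketched as follows, assuming the per-pixel class probabilities have already been produced by the model; the function name and toy values are illustrative:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, weight_map):
    """Pixel-wise weighted cross-entropy, a minimal sketch of the
    weighted loss function.

    probs      : (n, C) predicted class probabilities per pixel
    labels     : (n,)   ground-truth class index per pixel
    weight_map : (n,)   per-pixel weight w(x)
    """
    n = labels.shape[0]
    # cross entropy per pixel: -log of the probability of the true class
    ce = -np.log(probs[np.arange(n), labels])
    # multiply pixel-to-pixel by the weight map, then average over n
    return float(np.sum(weight_map * ce) / n)

# Two pixels, three classes (background, peripheral zone, central gland).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
uniform = weighted_cross_entropy(probs, labels, np.ones(2))
boosted = weighted_cross_entropy(probs, labels, np.array([2.0, 1.0]))
```

Raising the weight of a pixel (here the first one) raises its contribution to the loss, which is exactly how the hard-to-segment boundary pixels are emphasized during training.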
The design idea of the weight map w(x) proposed in the embodiment of the present invention is to assign higher weights to pixels that are difficult to segment in prostate MR images. The weight map w(x) contains three components, corresponding to the background, peripheral zone, and central gland, and can therefore also be written as w_i(x). In the embodiment of the present invention, the weight map is computed as:

w_i(x) = a_i · Morphology(y_x) / Grad(I_x) + b_i    (3)

where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, Grad(I_x) is the gradient of the original image, Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, a_i is a coefficient controlling how much the weight is increased, b_i is the base weight in the final loss function for pixels whose weight is not increased, and i = 0, 1, or 2 corresponds respectively to the background, peripheral zone, and central gland in the label.
The increased part of the weight is inversely proportional to the gradient of the original image: if the gradient near the edge of the prostate region is small, the edge is blurred, and the weight therefore needs to be increased more. The specific values of the parameters in formula (3) are set manually based on experience.
The weight map w_i(x) in the embodiment of the present invention consists of two parts:
the first part, a_i · Morphology(y_x) / Grad(I_x), is the increased part of the weight;
the second part, b_i, is the base weight.
Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, i.e., to specify which pixels in each MR image need their weight increased.
Morphology(y_x) is obtained by subtracting the eroded label map from the dilated label map: Morphology(y_x) = Dilation(y_x, sm_i) − Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is the morphological structuring element used to control the extent of the dilation and erosion operations.
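The weight map of formula (3) can be sketched as follows. The morphology is implemented with plain NumPy shifts rather than any particular image-processing library, and the parameter values a, b, r, and eps are illustrative assumptions, since the patent sets the parameters empirically:

```python
import numpy as np

def band_around_boundary(mask, r):
    """Dilation minus erosion of a binary mask: Morphology(y_x).

    Shift-based morphology with a (2r+1)x(2r+1) square structuring
    element standing in for sm_i; np.roll wraps at the border, which is
    acceptable here because the organ never touches the image edge.
    """
    mask = mask.astype(bool)
    dilated = np.zeros_like(mask)
    eroded = np.ones_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            dilated |= shifted   # union of shifts = dilation
            eroded &= shifted    # intersection of shifts = erosion
    return dilated & ~eroded     # band straddling the region boundary

def weight_map(image, mask, a=4.0, b=1.0, r=1, eps=1e-3):
    """Sketch of formula (3): w_i(x) = a_i*Morphology(y_x)/Grad(I_x) + b_i.

    Inside the boundary band the weight grows inversely with the image
    gradient (blurrier edges get more weight); elsewhere it stays at the
    base value b.  eps guards against division by zero.
    """
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gy, gx)
    band = band_around_boundary(mask, r)
    return np.where(band, a / (grad + eps), 0.0) + b

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0            # a bright square "organ"
msk = img.astype(bool)         # its label mask
w = weight_map(img, msk)
```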
The prostate MR image segmentation process in the embodiment of the present invention can be divided into two stages: first, a model is trained using existing data and labels; second, the trained model is used to infer the labels of new data, i.e., to obtain the segmentation result.
The embodiment of the present invention trains the model on a given data set so that the corresponding loss function reaches a minimum. The computation process is shown in Fig. 1: the image data is first input into the fully convolutional network to obtain the corresponding output probabilities, and the cross entropy between the output probabilities and the label is computed; meanwhile, the weight map is computed from the image and the label; the cross entropy is then multiplied pixel-by-pixel with the weight map to obtain the final loss. The training process of the model is the process of continually adjusting the parameters of the fully convolutional network (specifically, the weights of the neurons in the model) until the loss function reaches a minimum.
The segmentation process for an image to be segmented is shown in Fig. 2: first, the image to be segmented is input into the trained fully convolutional network to obtain an output result; some post-processing, including median filtering, is then applied to this result to obtain the initial segmentation result.
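The median-filtering post-processing step can be sketched as follows; this is a naive pure-NumPy version for illustration (a production system would likely call an optimized library routine), which removes isolated mislabeled pixels from the network's output:

```python
import numpy as np

def median_filter_labels(label_map, k=3):
    """k x k median filtering of a label map, a sketch of the
    post-processing step: an isolated mislabeled pixel is replaced by
    the median label of its neighborhood."""
    r = k // 2
    padded = np.pad(label_map, r, mode='edge')  # replicate the border
    out = np.empty_like(label_map)
    h, w = label_map.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

raw = np.zeros((5, 5), dtype=int)
raw[2, 2] = 2                     # a single speckle in the background
smoothed = median_filter_labels(raw)
```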
Optionally, as shown in Fig. 3, manual adjustment is performed by a physician on the basis of the initial segmentation result to obtain the final segmentation result.
The manual adjustment in the embodiment of the present invention is not a mandatory step; if the result obtained in the preceding steps is already sufficiently accurate, no manual adjustment is needed.
The segmentation performance of the embodiment of the present invention (without the manual adjustment step) was evaluated on a public data set using the Dice similarity coefficient (DSC) as the evaluation index. The Dice coefficient measures the overlap between the reference segmentation and the automatic segmentation (twice the size of their intersection divided by the sum of their sizes); its value lies between 0 and 1, and a higher value indicates a more accurate segmentation result.
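For reference, a small sketch of the Dice similarity coefficient used in the evaluation; the function name is illustrative:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A n B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0                 # both masks empty: perfect agreement
    return float(2.0 * np.logical_and(seg, ref).sum() / denom)

a = np.array([[1, 1, 0, 0]])
b = np.array([[0, 1, 1, 0]])
score = dice_coefficient(a, b)     # overlap of 1 pixel: 2*1/(2+2) = 0.5
```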
The method of the embodiment of the present invention was compared with the plain cross-entropy loss function; the comparison results are shown in Table 1.
Table 1. Comparison of the method of the embodiment of the present invention with the cross-entropy loss function (DSC)

Region             Cross-entropy loss    Method of the embodiment
Central gland      0.8557                0.8831
Peripheral zone    0.7171                0.7576

As can be seen from Table 1, evaluated with DSC, the segmentation performance of the embodiment of the present invention on the central gland (0.8831) is 0.0274 higher than that of the cross-entropy loss function (0.8557), and on the peripheral zone (0.7576) it is 0.0405 higher than that of the cross-entropy loss function (0.7171). In the segmentation of each region of the prostate, the method of the embodiment of the present invention outperforms the cross-entropy loss function.
The advantage of the embodiment of the present invention is that a new weight map calculation is designed for prostate MR images and used to weight the loss function, helping the deep learning model better segment each region of the prostate. Tested on a public data set, the method of the embodiment of the present invention achieves excellent performance. The proposed method is general enough to be extended to other medical image segmentation tasks, such as liver segmentation and cardiac segmentation.
Note: the above specific embodiments are merely examples and are not limiting; those skilled in the art can, in accordance with the concepts of the present invention, merge and combine steps and devices described separately in the above embodiments to achieve the effects of the invention. Such merged and combined embodiments are also included in the present invention and are not described one by one herein.
The advantages, benefits, and effects mentioned in the embodiments of the present invention are merely examples and are not limiting; it must not be assumed that they are required by every embodiment of the invention. The specific details disclosed above serve only to illustrate and aid understanding; they do not mean that the embodiments of the invention must be implemented using these specific details.
The block diagrams of devices, apparatuses, equipment, and systems involved in the embodiments of the present invention are merely illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown; as those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, and configured in any manner. Words such as "comprising", "including", and "having" are open-ended, mean "including but not limited to", and can be used interchangeably therewith. The words "or" and "and" as used in the embodiments of the present invention mean "and/or" and can be used interchangeably therewith, unless the context clearly indicates otherwise. The phrase "such as" as used in the embodiments of the present invention means "such as, but not limited to" and can be used interchangeably therewith.
The step flow charts and the above method descriptions in the embodiments of the present invention are merely illustrative examples and are not intended to require or imply that the steps of each embodiment must be carried out in the order given; as those skilled in the art will appreciate, the steps in the above embodiments can be carried out in any order. Words such as "thereafter", "then", and "next" are not intended to limit the order of the steps; they are only used to guide the reader through the description of the methods. Furthermore, any reference to an element in the singular, for example using the articles "a", "an", or "the", is not to be interpreted as limiting the element to the singular.
In addition, the steps and devices in the embodiments of the present invention are not confined to any one embodiment; in fact, relevant partial steps and partial devices of the embodiments herein can be combined, in accordance with the concepts of the present invention, to conceive new embodiments, and these new embodiments are also intended to be included within the scope of the present invention.
Each operation in the embodiments of the present invention can be carried out by any appropriate means capable of performing the corresponding function. Such means may include various hardware and/or software components and/or modules, including but not limited to hardware circuits, ASICs (Application Specific Integrated Circuits), or processors.
In practical applications, a general-purpose processor designed to perform the above functions, a DSP (Digital Signal Processor), an ASIC, an FPGA (Field Programmable Gate Array) or CPLD (Complex Programmable Logic Device), discrete gate or transistor logic, discrete hardware components, or any combination thereof can be used to implement the logical blocks, modules, and circuits illustrated above. A general-purpose processor can be a microprocessor, but in the alternative the processor can be any commercially available processor, controller, microcontroller, or state machine. A processor can also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in conjunction with the embodiments of the present invention can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in any form of tangible storage medium. Some examples of storage media that can be used include RAM (Random Access Memory), ROM (Read-Only Memory), flash memory, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, hard disks, removable disks, CD-ROMs (Compact Disc Read-Only Memory), and so on. A storage medium can be coupled to a processor so that the processor can read information from, and write information to, the storage medium; in the alternative, the storage medium can be integral to the processor. A software module can be a single instruction or many instructions, and can be distributed over several different code segments, among different programs, and across multiple storage media.
The methods of the embodiments of the present invention comprise one or more actions for implementing the methods described above. The methods and/or actions can be interchanged with one another without departing from the scope of the claims; in other words, unless a specific order of actions is specified, the order of specific actions can be modified and/or actions can be used without departing from the scope of the claims.
The functions in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, DVD (Digital Versatile Disc), floppy disk, and Blu-ray disc, where disks reproduce data magnetically, while discs reproduce data optically with lasers.
Thus, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions tangibly stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. The computer program product may include packaging material.
Software or instruction in the embodiment of the present invention can also be transmitted by transmission medium.It is, for example, possible to use such as
Coaxial cable, optical fiber cable, twisted pair, DSL (Digital Subscriber Line, digital subscriber line) or such as infrared,
The transmission medium of the wireless technology of radio or microwave is from website, server or other remote source softwares.
Further, modules and/or other appropriate means for performing the methods and techniques in the embodiments of the present invention can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via a storage means (e.g., a physical storage medium such as RAM, ROM, a CD, or a floppy disk), such that a user terminal and/or base station can obtain the various methods upon coupling the storage means to the device or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
Other examples and implementations are within the scope and spirit of the embodiments of the present invention and the appended claims. For example, due to the nature of software, the functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or any combination of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that a list of "at least one of A, B, or C" means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Further, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions, and alterations to the techniques described herein may be made by those skilled in the art without departing from the technology taught by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods, or actions presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the invention to the forms disclosed herein. While a number of exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, alterations, additions, and sub-combinations thereof.
Claims (10)
1. A method for segmenting a prostate magnetic resonance image, comprising the following steps:
in a training stage, inputting an image into a fully convolutional network to obtain a corresponding output probability, and computing the cross entropy between the output probability and a label; computing a weight map from the image and the label, and multiplying the cross entropy by the weight map pixel by pixel to obtain a final loss; and adjusting the parameters of the fully convolutional network so that the loss reaches a minimum;
in a segmentation stage, inputting a prostate magnetic resonance image to be segmented into the trained fully convolutional network to obtain a preliminary segmentation result.
2. The method for segmenting a prostate magnetic resonance image according to claim 1, wherein the weight map is computed by the following formula (the formula is rendered as an image in the original and is not reproduced here):
where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, Grad(I_x) denotes the gradient of the original image (the gradient-dependent term makes the weight increase inversely proportional to the gradient of the original image), Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, a_i is a coefficient controlling how much the weight is increased, b_i is the baseline contribution to the final loss function of pixels whose weight is not increased, and i = 0, 1, or 2, corresponding respectively to the background, the peripheral zone, and the central gland in the label.
3. The method for segmenting a prostate magnetic resonance image according to claim 2, wherein Morphology(y_x) is obtained by dilating and eroding the label map respectively and taking the difference: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is a morphological structuring element used to control the extent of the dilation and erosion operations.
4. The method for segmenting a prostate magnetic resonance image according to claim 1, further comprising, after obtaining the preliminary segmentation result, the following step:
performing manual adjustment on the basis of the preliminary segmentation result to obtain a final segmentation result.
5. The method for segmenting a prostate magnetic resonance image according to claim 1, wherein the parameters of the fully convolutional network refer to the weights of the neurons in the fully convolutional network model.
6. A system for segmenting a prostate magnetic resonance image, comprising:
a training unit, configured to: in a training stage, input an image into a fully convolutional network to obtain a corresponding output probability, and compute the cross entropy between the output probability and a label; compute a weight map from the image and the label, and multiply the cross entropy by the weight map pixel by pixel to obtain a final loss; and adjust the parameters of the fully convolutional network so that the loss reaches a minimum;
a segmentation unit, configured to: in a segmentation stage, input a prostate magnetic resonance image to be segmented into the trained fully convolutional network to obtain a preliminary segmentation result.
7. The system for segmenting a prostate magnetic resonance image according to claim 6, wherein the weight map is computed by the following formula (the formula is rendered as an image in the original and is not reproduced here):
where w_i(x) is the weight map, I_x is the gray value of the original image, y_x is the label map, Grad(I_x) denotes the gradient of the original image (the gradient-dependent term makes the weight increase inversely proportional to the gradient of the original image), Morphology(y_x) is a morphological operation used to control the spatial extent of the pixels whose weight is increased, a_i is a coefficient controlling how much the weight is increased, b_i is the baseline contribution to the final loss function of pixels whose weight is not increased, and i = 0, 1, or 2, corresponding respectively to the background, the peripheral zone, and the central gland in the label.
8. The system for segmenting a prostate magnetic resonance image according to claim 7, wherein Morphology(y_x) is obtained by dilating and eroding the label map respectively and taking the difference: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i), where Dilation(y_x, sm_i) is the result of dilating the label map, Erosion(y_x, sm_i) is the result of eroding the label map, and sm_i is a morphological structuring element used to control the extent of the dilation and erosion operations.
9. The system for segmenting a prostate magnetic resonance image according to claim 6, wherein the system further comprises:
a manual adjustment unit, configured to perform manual adjustment on the basis of the preliminary segmentation result to obtain a final segmentation result.
10. The system for segmenting a prostate magnetic resonance image according to claim 6, wherein the parameters of the fully convolutional network refer to the weights of the neurons in the fully convolutional network model.
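The loss construction recited in claims 1 to 3 can be sketched as follows. Because the claim 2 formula is not reproduced in the text above, the exact weighting form below (a baseline `b` plus an inverse-gradient term confined to the morphological band), the function names, and the structuring-element size are illustrative assumptions; the sketch also uses a single binary label rather than the three-class label (background, peripheral zone, central gland) of the claims.

```python
import numpy as np
from scipy import ndimage


def morphology_band(label, size=3):
    """Claim 3: Morphology(y_x) = Dilation(y_x, sm_i) - Erosion(y_x, sm_i).

    For a binary label the difference is a band straddling the label
    boundary; `size` plays the role of the structuring element sm_i.
    """
    structure = np.ones((size, size), dtype=bool)
    dilated = ndimage.binary_dilation(label, structure=structure)
    eroded = ndimage.binary_erosion(label, structure=structure)
    return dilated.astype(float) - eroded.astype(float)


def weight_map(image, label, a=5.0, b=1.0, size=3):
    """One plausible reading of claim 2 (hypothetical): the weight increase
    is confined to the morphological band and inversely proportional to the
    image gradient, on top of a baseline b (the claim's b_i)."""
    gx, gy = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)  # Grad(I_x), gradient magnitude
    band = morphology_band(label, size)
    return b + a * band / (1.0 + grad)


def weighted_cross_entropy(prob, label, weights):
    """Claim 1: multiply the cross entropy by the weight map pixel by pixel,
    then reduce to a scalar loss."""
    eps = 1e-7  # numerical guard for log(0)
    ce = -(label * np.log(prob + eps) + (1.0 - label) * np.log(1.0 - prob + eps))
    return float(np.mean(weights * ce))
```

In training, this scalar loss would then be minimized by adjusting the neuron weights of the fully convolutional network (e.g., by stochastic gradient descent), and in the segmentation stage the trained network alone produces the preliminary segmentation.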
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811538977.3A CN109636813B (en) | 2018-12-14 | 2018-12-14 | Method and system for segmentation of prostate magnetic resonance images |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811538977.3A CN109636813B (en) | 2018-12-14 | 2018-12-14 | Method and system for segmentation of prostate magnetic resonance images |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109636813A true CN109636813A (en) | 2019-04-16 |
| CN109636813B CN109636813B (en) | 2020-10-30 |
Family
ID=66074440
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811538977.3A Expired - Fee Related CN109636813B (en) | 2018-12-14 | 2018-12-14 | Method and system for segmentation of prostate magnetic resonance images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109636813B (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110189332A (en) * | 2019-05-22 | 2019-08-30 | 中南民族大学 | Prostate Magnetic Resonance Image Segmentation method and system based on weight G- Design |
| CN110689548A (en) * | 2019-09-29 | 2020-01-14 | 浪潮电子信息产业股份有限公司 | Medical image segmentation method, device, equipment and readable storage medium |
| CN111028206A (en) * | 2019-11-21 | 2020-04-17 | 万达信息股份有限公司 | Prostate cancer automatic detection and classification system based on deep learning |
| CN113476033A (en) * | 2021-08-18 | 2021-10-08 | 华中科技大学同济医学院附属同济医院 | Method for automatically generating MRI benign prostatic hyperplasia target region based on deep neural network |
| CN115619810A (en) * | 2022-12-19 | 2023-01-17 | 中国医学科学院北京协和医院 | Method, system and equipment for partitioning prostate |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8811701B2 (en) * | 2011-02-23 | 2014-08-19 | Siemens Aktiengesellschaft | Systems and method for automatic prostate localization in MR images using random walker segmentation initialized via boosted classifiers |
| CN107240102A (en) * | 2017-04-20 | 2017-10-10 | 合肥工业大学 | Malignant tumour area of computer aided method of early diagnosis based on deep learning algorithm |
| CN107886510A (en) * | 2017-11-27 | 2018-04-06 | 杭州电子科技大学 | A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks |
| CN108053417A (en) * | 2018-01-30 | 2018-05-18 | 浙江大学 | A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature |
| CN108345887A (en) * | 2018-01-29 | 2018-07-31 | 清华大学深圳研究生院 | The training method and image, semantic dividing method of image, semantic parted pattern |
2018
- 2018-12-14: CN application CN201811538977.3A granted as CN109636813B (en), status: not active (Expired - Fee Related)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8811701B2 (en) * | 2011-02-23 | 2014-08-19 | Siemens Aktiengesellschaft | Systems and method for automatic prostate localization in MR images using random walker segmentation initialized via boosted classifiers |
| CN107240102A (en) * | 2017-04-20 | 2017-10-10 | 合肥工业大学 | Malignant tumour area of computer aided method of early diagnosis based on deep learning algorithm |
| CN107886510A (en) * | 2017-11-27 | 2018-04-06 | 杭州电子科技大学 | A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks |
| CN108345887A (en) * | 2018-01-29 | 2018-07-31 | 清华大学深圳研究生院 | The training method and image, semantic dividing method of image, semantic parted pattern |
| CN108053417A (en) * | 2018-01-30 | 2018-05-18 | 浙江大学 | A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature |
Non-Patent Citations (1)
| Title |
|---|
| Xu Feng et al.: "A Nodule Segmentation Method Based on U-Net", Software Guide (《软件导刊》) * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110189332A (en) * | 2019-05-22 | 2019-08-30 | 中南民族大学 | Prostate Magnetic Resonance Image Segmentation method and system based on weight G- Design |
| CN110689548A (en) * | 2019-09-29 | 2020-01-14 | 浪潮电子信息产业股份有限公司 | Medical image segmentation method, device, equipment and readable storage medium |
| CN110689548B (en) * | 2019-09-29 | 2023-01-17 | 浪潮电子信息产业股份有限公司 | A medical image segmentation method, device, equipment and readable storage medium |
| CN111028206A (en) * | 2019-11-21 | 2020-04-17 | 万达信息股份有限公司 | Prostate cancer automatic detection and classification system based on deep learning |
| CN113476033A (en) * | 2021-08-18 | 2021-10-08 | 华中科技大学同济医学院附属同济医院 | Method for automatically generating MRI benign prostatic hyperplasia target region based on deep neural network |
| CN115619810A (en) * | 2022-12-19 | 2023-01-17 | 中国医学科学院北京协和医院 | Method, system and equipment for partitioning prostate |
| CN115619810B (en) * | 2022-12-19 | 2023-10-03 | 中国医学科学院北京协和医院 | A prostate segmentation method, system and equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109636813B (en) | 2020-10-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109636813A (en) | Method and system for segmentation of prostate magnetic resonance images | |
| Kumar et al. | Automated and real-time segmentation of suspicious breast masses using convolutional neural network | |
| TWI828109B (en) | Interactive training of a machine learning model for tissue segmentation | |
| CN104851101A (en) | Brain tumor automatic segmentation method based on deep learning | |
| CN111179237A (en) | A kind of liver and liver tumor image segmentation method and device | |
| CN106600621B (en) | A spatiotemporal collaborative segmentation method based on multimodal MRI images of infant brain tumors | |
| Kanna et al. | A review on prediction and prognosis of the prostate cancer and gleason grading of prostatic carcinoma using deep transfer learning based approaches | |
| Chen et al. | MRI brain tissue classification using unsupervised optimized extenics-based methods | |
| Hesamian et al. | Synthetic CT images for semi-sequential detection and segmentation of lung nodules | |
| Xu et al. | Novel robust automatic brain-tumor detection and segmentation using magnetic resonance imaging | |
| Maillard et al. | A deep residual learning implementation of metamorphosis | |
| Nowinski et al. | A 3D model of human cerebrovasculature derived from 3T magnetic resonance angiography | |
| CN109685814A (en) | Cholecystolithiasis ultrasound image full-automatic partition method based on MSPCNN | |
| Wu et al. | Auto-contouring via automatic anatomy recognition of organs at risk in head and neck cancer on CT images | |
| Diaz-Pinto et al. | Retinal image synthesis for glaucoma assessment using DCGAN and VAE models | |
| Chen et al. | Adversarial robustness study of convolutional neural network for lumbar disk shape reconstruction from MR images | |
| Na et al. | Radiomicsfill-mammo: Synthetic mammogram mass manipulation with radiomics features | |
| Sha | Segmentation of ovarian cyst in ultrasound images using AdaResU-net with optimization algorithm and deep learning model | |
| Kumar et al. | E-fuzzy feature fusion and thresholding for morphology segmentation of brain MRI modalities | |
| CN110189332A (en) | Prostate Magnetic Resonance Image Segmentation method and system based on weight G- Design | |
| Guo et al. | Brain tumor segmentation based on attention mechanism and multi-model fusion | |
| CN114399501A (en) | Deep learning convolutional neural network-based method for automatically segmenting prostate whole gland | |
| CN118351115B (en) | Determination method, system, device, equipment and medium for ablation area | |
| CN113907710A (en) | A skin lesion classification system based on model-independent image-augmented meta-learning | |
| Deshpande et al. | Train small, generate big: Synthesis of colorectal cancer histology images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20201030 |