CN117036695A - A deep neural network image segmentation method based on smoothness constraints - Google Patents
A deep neural network image segmentation method based on smoothness constraints
- Publication number
- CN117036695A (application CN202310910962.XA)
- Authority
- CN
- China
- Prior art keywords
- loss
- function
- smoothness
- segmentation
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a deep neural network image segmentation method based on smoothness constraints, which adopts a smoothness loss calculation method based on curve evolution as the loss function of a neural network segmentation model. The method first converts the segmentation labels into a signed distance function and proposes a content loss metric based on a distance penalty, in which, for incorrectly segmented pixels, the loss is positively correlated with the distance from the label boundary, so that small mis-segmented regions are effectively avoided. It then proposes an unsupervised length loss metric, which minimizes the boundary length of the predicted mask based on the level-set method and ensures the smoothness of the segmentation result. Finally, to address the disappearance of segmentation targets and mis-segmentation in the prediction map caused by possible over-constraint of the length loss function, an area compensation loss metric is proposed to provide an outward compensation loss during image segmentation.
Description
Technical Field
The invention belongs to the technical field of deep neural networks, and particularly relates to a deep neural network image segmentation method based on smoothness constraint.
Background
Image segmentation is the technique and process of dividing an image into several specific regions with distinctive properties and extracting the objects of interest. With the rapid development of deep learning in recent years, deep neural networks (Deep Neural Network, DNN) have made great progress in the field of image analysis owing to their powerful feature extraction capability. Building such a segmentation network mainly comprises two parts: designing the model structure and training the model parameters.
The loss function is a distance metric used to evaluate the discrepancy between model predictions and ground-truth labels, and it plays an important role in improving the accuracy and stability of model segmentation. Loss functions for image segmentation can be roughly divided into four types: distribution-based losses, region-based losses, boundary-based losses, and compound losses.
The most commonly used distribution-based loss function is the binary cross-entropy loss (Binary Cross-Entropy), typically applied after a Sigmoid or Softmax activation in classification tasks. When the number of foreground pixels is far smaller than the number of background pixels, i.e. pixels with y=0 vastly outnumber pixels with y=1, the y=0 component dominates the loss, the data set is effectively imbalanced, and the model prediction ends up severely biased towards the background, performing poorly in practical applications. The weighted cross-entropy loss (Weighted Binary Cross-Entropy) extends the cross-entropy loss by weighting the positive samples and can obtain better results when the sample numbers are imbalanced, but the weights for hard samples must be adjusted manually, which makes parameter tuning difficult.
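For illustration, a minimal PyTorch sketch of the weighted binary cross-entropy described above (the pos_weight value is an arbitrary example and, as noted, would have to be tuned by hand):

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits: torch.Tensor, target: torch.Tensor, pos_weight: float = 10.0) -> torch.Tensor:
    """Binary cross-entropy with a manually chosen positive-class weight.

    target is a float mask with 1 for foreground and 0 for background. When
    foreground pixels are rare, pos_weight > 1 up-weights the y = 1 terms so the
    y = 0 terms no longer dominate; the value has to be tuned by hand.
    """
    return F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight)
    )
```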
A common region-based loss function is the Dice loss (Dice Loss), adapted from the Dice coefficient. However, the Dice loss is very unfavorable for small targets: with only foreground and background, once a small target has a few wrongly predicted pixels, the Dice value changes drastically, causing severe gradient fluctuations and unstable training. Furthermore, the Dice loss weights false positives (FP) and false negatives (FN) equally, which in practice yields segmentations with high precision but low recall.
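Likewise, a typical soft Dice loss implementation looks like the following sketch (the smoothing constant is illustrative); note that false positives and false negatives enter the denominator symmetrically, which is the equal weighting discussed above:

```python
import torch

def dice_loss(prob: torch.Tensor, target: torch.Tensor, smooth: float = 1.0) -> torch.Tensor:
    """Soft Dice loss on probabilities in [0, 1]; FP and FN are weighted equally."""
    prob = prob.flatten(1)      # (B, H*W)
    target = target.flatten(1)
    inter = (prob * target).sum(dim=1)
    dice = (2.0 * inter + smooth) / (prob.sum(dim=1) + target.sum(dim=1) + smooth)
    return (1.0 - dice).mean()
```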
The third class of loss functions is boundary-based, such as the Hausdorff distance loss (Hausdorff Distance Loss); these are not widely used because of their non-convexity. The last class consists of compound loss functions, such as the exponential logarithmic loss (Exponential Logarithmic Loss), which can force the network to focus on inaccurately predicted parts so as to recover finer segmentation boundaries and a more accurate data distribution, but the additional parameter weights make tuning considerably harder.
Although existing loss functions have made great progress, they are still point-wise comparison metrics, i.e. the pixels of the segmentation result are compared with the corresponding label one by one. This approach makes it difficult to measure the accuracy of the segmentation result from a global perspective, leading to non-smooth segmentation results and small mis-segmented regions.
Disclosure of Invention
The invention aims to provide a deep neural network image segmentation method based on smoothness constraints that addresses the above defects of existing loss functions. The specific scheme is as follows:
the first aspect of the present invention provides a smoothness loss function based on curve evolution for a neural network segmentation model, comprising:
content loss function L based on distance penalty for loss measurement of pixel points of segmented image when used for image segmentation content ;
Unsupervised length loss function L for predicting boundary length of Mask for image prediction length The method comprises the steps of carrying out a first treatment on the surface of the The method comprises the steps of,
is used for compensating a length loss function L in the image segmentation process length Area compensation loss function L of over-segmentation prediction caused by over constraint area ;
The smoothness loss function ConvLetLoss based on curve evolution is defined as:
where α is the weight of the length loss function L_length, β is the weight of the area compensation loss function L_area, and w is the image width.
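The combined formula itself is reproduced in the original filing only as an image; a plausible reading, assuming a simple weighted sum of the three terms, is:

$$\mathrm{ConvLetLoss} = L_{content} + \alpha\,L_{length} + \beta\,L_{area}, \qquad \beta < 0,$$

with, per Embodiment 1, an additional factor of four times the image width applied to the length term to keep its value range comparable to that of the content loss.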
The second aspect of the invention provides a deep neural network segmentation model based on smoothness constraint, which adopts the smoothness loss function based on curve evolution as a loss function of the neural network segmentation model.
The third aspect of the present invention provides a deep neural network segmentation apparatus based on smoothness constraint:
a memory; and
a processor coupled to the memory, the processor being configured to implement the smoothness-constraint-based deep neural network segmentation model when executing the instructions stored in the memory.
Compared with the prior art, the invention has outstanding substantive features and represents remarkable progress, specifically:
the method firstly converts the segmentation labels into the symbol distance function, and provides a content loss measurement based on distance penalty, and for pixel points with wrong segmentation, the loss is positively correlated with the distance from the label boundary, so that a small region with wrong segmentation is effectively avoided; an unsupervised length loss metric is then presented that minimizes the boundary length of the predictive Mask based on the level set approach, thereby ensuring the smoothness of the segmentation results. Finally, in order to solve the situation that the segmentation target in the prediction graph disappears and is mistakenly segmented due to the fact that the length loss function is possibly over-constrained, an area compensation loss measurement is provided to provide an outward compensation loss in the process of segmenting the image, and therefore the problems that the segmentation result of the existing loss function is not smooth and a small area with mistaken segmentation appears are solved.
Drawings
Fig. 1 is the distance-transformed image of a ground-truth contour label in Embodiment 1 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more clear, the following description of the technical solutions in the embodiments of the present invention will be given in detail, but the present invention is not limited to these embodiments:
example 1
A smoothness loss calculation method based on curve evolution for a neural network segmentation model, comprising:
content loss metrics
In image segmentation, a content loss function L_content based on a distance penalty is adopted to measure the loss of the pixels of the segmented image.
The neural network segmentation model performs the segmentation task under supervised training, so the ground-truth contour labels of the images need to be initialized into SDF (signed distance function) form before the model performs the image segmentation task. This embodiment uses a distance transform function for the initialization. In the original contour label, the background pixel value is 0 and the segmentation target region pixel value is 1. The distance transform computes, for each pixel, the distance to the nearest 0-valued pixel; the loss of a pixel is positively correlated with its distance from the contour label boundary, and the input is a grayscale image. As shown in Fig. 1, the brighter a pixel in the distance-transformed image, the farther it is from the zero-valued region.
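As a concrete illustration of this initialization, a minimal sketch using SciPy's Euclidean distance transform (the helper name is illustrative, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def label_to_distance_map(label: np.ndarray) -> np.ndarray:
    """Convert a binary contour label (background = 0, target = 1) into a distance map.

    distance_transform_edt assigns to every non-zero pixel its Euclidean distance to
    the nearest 0-valued pixel, matching the initialization described above: the
    farther a target pixel lies from the label boundary, the larger (brighter) its value.
    """
    return distance_transform_edt(label.astype(np.uint8))
```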
In order for the neural network segmentation model to output a form similar to an SDF, the activation function of the last output layer of the network is set to the LeakyReLU function, a variant of the widely used ReLU activation. Unlike the commonly used Sigmoid and ReLU functions, its output has a small non-zero slope for negative inputs. Since the derivative is never zero, this reduces the occurrence of dead neurons and keeps gradient-based learning possible, solving the problem that neurons stop learning once the ReLU input enters the negative interval. LeakyReLU preserves values on the negative axis, so the information carried by negative activations is not lost entirely.
The LeakyReLU activation function is defined as f(x) = x for x ≥ 0 and f(x) = λx for x < 0, where λ is a small positive slope (commonly 0.01).
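In PyTorch terms, the output head might look like the following sketch (the channel count and negative slope are assumptions for illustration):

```python
import torch.nn as nn

# Hypothetical segmentation head: a 1x1 convolution followed by LeakyReLU, so the
# output keeps (scaled) negative values and can approximate a signed distance map.
head = nn.Sequential(
    nn.Conv2d(64, 1, kernel_size=1),    # 64 input channels assumed for illustration
    nn.LeakyReLU(negative_slope=0.01),  # small non-zero slope for negative inputs
)
```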
The content loss function L_content based on a distance penalty is defined as follows:
Let Ω ⊂ R² be the image domain, P: Ω → R the segmentation result output by the neural network segmentation model, and D: Ω → R the distance map obtained by transforming the manually annotated label.
Length loss metric
In image prediction, an unsupervised length loss function L_length is adopted to measure the boundary length of the predicted mask.
Training the neural network model with the content loss function alone has the drawback that the smoothness of the segmentation edges cannot be guaranteed; therefore, this embodiment defines the following length loss metric:
p (x, y) =0 represents the target boundary predicted by the neural network, L length Integrating along the object boundary is equivalent to calculating the number of boundary points, i.e. the boundary length. Minimizing L during back propagation length Meaning that the curve segments surrounding the same area are minimized, i.e. the curve is the smoothest, thereby ensuring the smoothness of the prediction boundaries.
In the length loss function L_length above, the integration region is the boundary point set P(x, y) = 0. Since P is defined on a discrete grid and ideally takes only positive and negative values, it is almost never exactly 0; therefore the output P of the neural network segmentation model needs to be converted with the Heaviside function H(x). After this conversion, H(P) takes the value 0 in the target region and 1 in the background region, so the boundary point set P(x, y) = 0 can be expressed as the set of points in the image domain where the gradient of H(P) is non-zero, i.e. the set of points where the gradient modulus |∇H(P)| is greater than 0. Thus, the length loss function L_length may instead be computed as the integral of the gradient magnitude |∇H(P)| over the image domain Ω.
it should be noted that, the derivative region of the Heaviside function at x=0 is infinite, which may cause a problem that the gradient of the neural network segmentation model cannot be updated in the training process, and for this embodiment, the following function is used to approximate the Heaviside function:
as coefficients, for scaling the weight size of x, in the comparison experiment of cross entropy and the function ++>Let 1, in ablation experiments, +.>Setting to 0.25, the weight has a certain influence on the experiment;
in the back propagation process, the following function is used instead of the derivative of the Heaviside function, i.e., the dirac function:
。
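Putting this sub-section together, a minimal sketch of the smoothed Heaviside, the corresponding Dirac function and the resulting length term (the arctan-based smoothing and the finite-difference gradient are common level-set choices assumed here, not a reproduction of the patented formulas):

```python
import math
import torch

def heaviside_eps(x: torch.Tensor, eps: float = 0.25) -> torch.Tensor:
    """Smoothed Heaviside function; eps scales x (arctan form assumed)."""
    return 0.5 * (1.0 + (2.0 / math.pi) * torch.atan(x / eps))

def dirac_eps(x: torch.Tensor, eps: float = 0.25) -> torch.Tensor:
    """Smoothed Dirac function, the derivative of heaviside_eps with respect to x."""
    return (1.0 / math.pi) * eps / (eps ** 2 + x ** 2)

def length_loss(pred: torch.Tensor, eps: float = 0.25) -> torch.Tensor:
    """Level-set style length term: sum of |grad H(P)| over the image, approximated
    with forward finite differences; it effectively counts boundary pixels."""
    h = heaviside_eps(pred, eps)              # (B, 1, H, W)
    dy = h[..., 1:, :] - h[..., :-1, :]       # vertical differences,   (B, 1, H-1, W)
    dx = h[..., :, 1:] - h[..., :, :-1]       # horizontal differences, (B, 1, H, W-1)
    grad_mag = torch.sqrt(dy[..., :, :-1] ** 2 + dx[..., :-1, :] ** 2 + 1e-8)
    return grad_mag.sum(dim=(-2, -1)).mean()
```

When heaviside_eps is used directly in the forward pass, autograd differentiates it automatically; dirac_eps is written out only to mirror the back-propagation description above.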
area compensation loss measurement
To address the problem that the length loss metric may over-constrain the segmentation, causing segmentation targets to disappear from the over-constrained prediction map, and the problem that targets with blurred boundaries may be over-segmented inwards, an area compensation loss function L_area is defined to provide an outward "offset" loss during segmentation.
The area compensation loss function L_area is defined as follows:
where w is the image width.
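A hedged sketch of one plausible area compensation term, reusing heaviside_eps from the previous sketch (the normalization by the squared image width and the sign convention are assumptions; the original definition involving w is not reproduced here):

```python
import torch

def area_loss(pred: torch.Tensor, eps: float = 0.25) -> torch.Tensor:
    """Hypothetical area compensation term (reuses heaviside_eps from the sketch above).

    Per the text, H(P) is ~0 inside the target and ~1 in the background, so the
    smoothed predicted target area is the sum of (1 - H(P)); it is normalized here
    by the squared image width w, whose exact role in the original formula is an
    assumption. Entering ConvLetLoss with a negative weight beta, this term pushes
    the contour outwards and counteracts over-shrinking.
    """
    w = pred.shape[-1]                          # image width, as referenced in the text
    h = heaviside_eps(pred, eps)
    target_area = (1.0 - h).sum(dim=(-2, -1)) / (w * w)
    return target_area.mean()
```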
Smoothness loss function based on curve evolution
The smoothness loss function ConvLetLoss based on curve evolution is adopted and defined as:
where α is the weight of the length loss function L_length, used to enhance performance across data sets with different distributions, and β is the weight of the area compensation loss function L_area and is a negative number. To keep the value range of the length loss consistent with that of the content loss, the length loss is additionally weighted by four times the image width in the curve-evolution-based smoothness loss function ConvLetLoss.
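Combining the sketches above into one training loss (the placement of the 4w factor as a multiplier on the length term and the default weight values are assumptions based on the translated description; content_loss, length_loss and area_loss refer to the earlier sketches):

```python
import torch

def convlet_loss(pred: torch.Tensor, dist_map: torch.Tensor,
                 alpha: float = 1.0, beta: float = -1.0, eps: float = 0.25) -> torch.Tensor:
    """Hypothetical combined smoothness loss: content + weighted length + weighted area.

    alpha scales the length term, which is additionally multiplied by 4 * image width
    as described in the text (whether the factor multiplies or divides is assumed);
    beta is negative so the area term provides the outward compensation.
    """
    w = pred.shape[-1]
    return (content_loss(pred, dist_map)
            + alpha * 4.0 * w * length_loss(pred, eps)
            + beta * area_loss(pred, eps))
```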
Other loss functions such as cross entropy and Dice Loss can be used for comparison experiments on the adventitia data of the ACDC2017 left-ventricle data set; experiments on regions prone to small mis-segmentations and on rough boundaries show that the method proposed here performs better overall on evaluation indices such as the Dice coefficient, HD, JSC and APD. The length loss metric L_length and the area compensation loss metric L_area can also be added to an existing loss function; the results show that the method proposed in this patent performs better on these evaluation indices than the original loss function, demonstrating that the L_length and L_area proposed in this embodiment are effective components for improving segmentation accuracy.
Embodiment 2
This embodiment provides a deep neural network segmentation model based on smoothness constraints, which adopts the smoothness loss calculation method based on curve evolution described in Embodiment 1 as the loss function of the neural network segmentation model.
Embodiment 3
This embodiment provides a deep neural network segmentation apparatus based on smoothness constraints, which comprises:
A memory; and
a processor coupled to the memory, the processor being configured to implement the smoothness-constraint-based deep neural network segmentation model of Embodiment 2 when executing the instructions stored in the memory.
The memory may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
The apparatus may also include an input/output interface, a network interface, a storage interface, etc. These interfaces, the memory and the processor may be connected, for example, by a bus. The input/output interface provides a connection interface for input/output devices such as a display, a mouse, a keyboard and a touch screen. The network interface provides a connection interface for various networking devices. The storage interface provides a connection interface for external storage devices such as an SD card or a USB flash drive.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more non-transitory computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (10)
1. A smoothness loss calculation method based on curve evolution for a neural network segmentation model, comprising:
in image segmentation, a content loss function L_content based on a distance penalty is adopted to perform loss measurement on the pixels of the segmented image;
in image prediction, an unsupervised length loss function L_length is adopted to measure the boundary length of the predicted mask; and,
in the image segmentation process, an area compensation loss function L_area is adopted to compensate for the over-segmented predictions caused by over-constraint of the length loss function L_length;
the smoothness loss function ConvLetLoss based on curve evolution is adopted and defined as:
where α is the weight of the length loss function L_length, β is the weight of the area compensation loss function L_area, and w is the image width.
2. The curve evolution-based smoothness loss calculation method according to claim 1, further comprising:
the neural network segmentation model initializes the true contour labels of the image to the SDF form before performing the image segmentation task.
3. The method for calculating smoothness loss based on curve evolution according to claim 2, wherein the initializing process is implemented by using a distance transform function:
setting the pixel value of the background as 0 and the pixel value of the segmentation target area as 1 in the original contour label;
calculating the distance between each pixel in the contour label and the nearest 0-value pixel;
the magnitude of the pixel loss is positively correlated to the distance from the contour label boundary.
4. The method for calculating smoothness loss based on curve evolution according to claim 2, wherein the activation function of the last output layer of the neural network segmentation model is set as the LeakyReLU function.
5. The curve-evolution-based smoothness loss calculation method according to claim 1, wherein the content loss function L_content based on a distance penalty is defined as follows:
let Ω ⊂ R² be the image domain, P: Ω → R the segmentation result output by the neural network segmentation model, and D: Ω → R the distance map obtained by transforming the manually annotated label.
6. The curve-evolution-based smoothness loss calculation method according to claim 5, wherein the length loss function L_length is defined as the integral of the gradient magnitude |∇H(P)| over the image domain Ω, where ∇ denotes the gradient operator and H(P) is the result of transforming the output P with the Heaviside function H(x).
7. The curve-evolution-based smoothness loss calculation method according to claim 6, wherein: in the training process of the neural network segmentation model, a smoothed function is used to approximate the Heaviside function, with a coefficient (denoted ε here) used to scale the weight of x; and in the back-propagation process, the corresponding Dirac function is used instead of the derivative of the Heaviside function.
8. The method for calculating smoothness loss based on curve evolution according to claim 6, characterized in that the area compensation loss function L_area is defined as follows:
where w is the image width.
9. A deep neural network segmentation model based on smoothness constraint, characterized in that the smoothness loss calculation method based on curve evolution according to any one of claims 1-8 is adopted as the loss function of the neural network segmentation model.
10. A deep neural network segmentation apparatus based on smoothness constraint, characterized by comprising:
a memory; and
a processor coupled to the memory, the processor being configured to implement the smoothness-constraint-based deep neural network segmentation model of claim 9 when executing the instructions stored in the memory.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310910962.XA CN117036695B (en) | 2023-07-24 | 2023-07-24 | A Deep Neural Network Image Segmentation Method Based on Smoothness Constraints |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310910962.XA CN117036695B (en) | 2023-07-24 | 2023-07-24 | A Deep Neural Network Image Segmentation Method Based on Smoothness Constraints |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117036695A true CN117036695A (en) | 2023-11-10 |
| CN117036695B CN117036695B (en) | 2025-11-18 |
Family
ID=88625384
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310910962.XA Active CN117036695B (en) | 2023-07-24 | 2023-07-24 | A Deep Neural Network Image Segmentation Method Based on Smoothness Constraints |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117036695B (en) |
-
2023
- 2023-07-24 CN CN202310910962.XA patent/CN117036695B/en active Active
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101013502A (en) * | 2006-12-29 | 2007-08-08 | 浙江大学 | Method for correcting measurement offset caused by tissue section image segmentation |
| DE102008057746A1 (en) * | 2008-11-17 | 2010-05-20 | Technische Universität Berlin | Image data generating method for displaying computer generated object on displaying device, involves rendering divided polygon area networks with shaded value for receiving image data in display device |
| US20190130275A1 (en) * | 2017-10-26 | 2019-05-02 | Magic Leap, Inc. | Gradient normalization systems and methods for adaptive loss balancing in deep multitask networks |
| US20210264207A1 (en) * | 2020-02-26 | 2021-08-26 | Adobe Inc. | Image editing by a generative adversarial network using keypoints or segmentation masks constraints |
| CN112561860A (en) * | 2020-11-23 | 2021-03-26 | 重庆邮电大学 | BCA-UNet liver segmentation method based on prior shape constraint |
| CN115131557A (en) * | 2022-05-30 | 2022-09-30 | 沈阳化工大学 | Lightweight segmentation model construction method and system based on activated sludge image |
Non-Patent Citations (2)
| Title |
|---|
| 朱恰; 王建; 刘星雨; 周再文; 马紫雯; 高贤君: "Refined building extraction from remote sensing images based on improved MRF" (基于改进MRF的遥感影像建筑物精提取), 计算机与现代化 (Computer and Modernization), no. 07, 15 July 2020 (2020-07-15) * |
| 郭斯羽; 胡萍萍; 唐璐; 温和; 刘敏: "A fast burr-removal method for tree-like skeletons based on region reconstruction" (基于区域重构的树状骨架快速去毛刺方法), 电子测量与仪器学报 (Journal of Electronic Measurement and Instrumentation), no. 04, 15 April 2020 (2020-04-15) * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN117036695B (en) | 2025-11-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10970844B2 (en) | Image segmentation method and device, computer device and non-volatile storage medium | |
| KR102073873B1 (en) | Method for semantic segmentation and apparatus thereof | |
| CN111833306B (en) | Defect detection method and model training method for defect detection | |
| CN108596053B (en) | A vehicle detection method and system based on SSD and vehicle pose classification | |
| JP5174040B2 (en) | Computer-implemented method for distinguishing between image components and background and system for distinguishing between image components and background | |
| CN109472792B (en) | An Image Segmentation Method Combining Local Energy Functional with Local Entropy and Non-convex Regular Term | |
| CN109035274B (en) | Document image binarization method based on background estimation and U-shaped convolutional neural network | |
| CN111986126B (en) | Multi-target detection method based on improved VGG16 network | |
| KR102204956B1 (en) | Method for semantic segmentation and apparatus thereof | |
| CN106296675A (en) | A kind of dividing method of the uneven image of strong noise gray scale | |
| CN111160358B (en) | Image binarization method, device, equipment and medium | |
| CN113781515A (en) | Cell image segmentation method, device and computer readable storage medium | |
| CN117974705A (en) | Defect image segmentation method, device and electronic equipment based on multi-threshold | |
| CN104077765A (en) | Image segmentation device, image segmentation method and program | |
| CN110910332B (en) | A dynamic blur processing method for visual SLAM system | |
| CN111383191B (en) | Image processing method and device for vascular fracture repair | |
| CN114154575B (en) | Recognition model training method, device, computer equipment and storage medium | |
| CN114511743B (en) | Detection model training, target detection method, device, equipment, medium and product | |
| CN111815652A (en) | A method and apparatus for multi-scale local threshold segmentation of images | |
| CN117036695B (en) | A Deep Neural Network Image Segmentation Method Based on Smoothness Constraints | |
| CN108182684B (en) | A kind of image segmentation method and device based on weighted kernel function fuzzy clustering | |
| CN114240988A (en) | Image segmentation method based on nonlinear scale space | |
| CN113343979A (en) | Method, apparatus, device, medium and program product for training a model | |
| CN114005120A (en) | License plate character cutting method, license plate recognition method, device, equipment and storage medium | |
| CN113112515A (en) | An Evaluation Method of Pattern Image Segmentation Algorithm |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |