
CN111210436A - Lens segmentation method, device and storage medium - Google Patents


Info

Publication number
CN111210436A
CN111210436A
Authority
CN
China
Prior art keywords
image
segmentation
lens
roi
input
Prior art date
Legal status
Granted
Application number
CN201911350333.6A
Other languages
Chinese (zh)
Other versions
CN111210436B (en)
Inventor
曹桂平
刘江
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Cixi Institute of Biomedical Engineering CIBE of CAS
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Cixi Institute of Biomedical Engineering CIBE of CAS
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Cixi Institute of Biomedical Engineering CIBE of CAS filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201911350333.6A
Publication of CN111210436A
Application granted
Publication of CN111210436B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract


The invention relates to the technical field of image analysis, in particular to a lens segmentation method, device and storage medium. The method includes: obtaining a ROI image including the lens from the original image; judging the difficulty of image segmentation based on the ROI image, and obtaining a lens segmentation result through a real-time segmentation network based on the ROI image. The target area is extracted by the preprocessing algorithm, which greatly reduces the size of the image to be segmented, reduces the interference of redundant information, and reduces the calculation amount of the segmentation network algorithm.


Description

Lens segmentation method, device and storage medium
Technical Field
The invention relates to the technical field of image analysis, in particular to a lens segmentation method, a lens segmentation device and a storage medium.
Background
Cataract is a disease in which denaturation of the proteins in the eye's crystalline lens causes clouding of the lens and consequent visual impairment.
Anterior Segment Optical Coherence Tomography (AS-OCT) is a non-invasive, high-resolution imaging technique for the anterior segment that is often used to aid the diagnosis of ophthalmic diseases such as cataract and glaucoma. Lens density is an important index for measuring the severity of cataract and related diseases, and structural segmentation of the lens is an important basis and precondition for calculating it. Currently, the Lens Opacities Classification System III (LOCS III) is used internationally to grade cataracts in vivo and determine the extent and degree of lens opacity. The main disadvantages of this method are that grading requires human intervention, the result depends heavily on the physician's experience, and grading quality therefore varies markedly.
The invention patent application with publication number CN110176007A, published on August 27, 2019, discloses a lens segmentation method that achieves automatic segmentation of the lens structure with a preset neural network model and shape template, reducing labor cost and improving segmentation accuracy. However, that method requires a U-shaped fully convolutional neural network model and a shape template to be trained in advance, its stability and precision depend on the training results, its computational load is large, and its running time is long.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides a lens segmentation method, comprising:
acquiring an ROI image containing the crystalline lens from the original image;
the image segmentation difficulty is judged based on the ROI image, and a lens segmentation result is obtained through a real-time segmentation network based on the ROI image.
The target area is extracted through the preprocessing algorithm, the size of the image to be segmented is greatly reduced, the interference of redundant information is reduced, and the calculation amount of the segmentation network algorithm is reduced.
Further, acquiring the ROI image containing the lens from the original image includes: filtering the original image and inputting it into a ShuffleSeg network to obtain a segmentation result image; obtaining the left, right, upper, and lower boundaries of the ROI image from the segmentation result image; and cropping the ROI image out of the original image according to the obtained boundaries. Extracting the ROI with an image segmentation approach overcomes the poor robustness of traditional methods and deep-learning detection algorithms and their tendency to overshoot the detection boundary, and markedly improves stability and precision.
Further, the determining the image segmentation difficulty based on the ROI image comprises: and inputting the ROI image into a ShuffleNet network for coding, inputting the coded feature map into a SkipNet network for decoding, fusing the coding result and the decoding result after respectively averaging and pooling, and connecting a full connection layer to obtain the image segmentation difficulty. The segmentation difficulty level of the lens can be used as an evaluation of the confidence of the image segmentation result.
Further, obtaining the lens segmentation result through the real-time segmentation network based on the ROI image includes: inputting the ROI image into a ShuffleNet network for encoding to extract image features, and decoding with a SkipNet network, which upsamples the extracted features to compute the final class probability map. The segmentation network based on ShuffleNet and SkipNet greatly reduces the computational load and achieves real-time segmentation while maintaining accuracy, which is of great significance for practical application.
Further, before the ROI image containing the lens is obtained from the original image, whether the original image can be segmented is judged; if it cannot be segmented, the image segmentation difficulty is judged to be the maximum. This quality-control step improves segmentation efficiency by passing on only images that can be segmented.
Further, the determining whether the original image is divisible includes: and inputting the original image into a ShuffleNet network for coding, and connecting a full connection layer after adding an average pooling layer in the coded output layer to output a judgment result.
The present invention also provides a lens segmentation system, comprising:
the ROI image extraction module is used for acquiring an ROI image containing the crystalline lens from the original image;
the segmentation difficulty grading module is used for judging the image segmentation difficulty based on the input ROI image;
the real-time segmentation module is used for carrying out real-time lens segmentation on the input ROI image;
the ROI image extraction module extracts an ROI image from an input original image and inputs the ROI image into the segmentation difficulty grading module and the real-time segmentation module.
In the preprocessing process of extracting the region of interest, the image segmentation idea is adopted for boundary searching, so that the problems of poor robustness, easiness in exceeding of a detection boundary and the like in the traditional method and a deep learning detection algorithm are solved, and the stability and the precision are obviously improved. Meanwhile, the target area is extracted through the preprocessing algorithm, the size of the image to be segmented is greatly reduced, the interference of redundant information is reduced, and the calculated amount of the segmentation network algorithm is reduced.
Preferably, the system further comprises a segmentability judging module for judging whether the input original image can be segmented. When it judges that the input original image can be segmented, it lets the ROI image extraction module extract the ROI region; when it judges that the image cannot be segmented, it judges the image segmentation difficulty to be the maximum.
The present invention also provides a computer-readable storage medium, comprising computer-readable instructions, which, when read and executed by a processor, cause the processor to perform the operations of any of claims 1-6.
The invention has the following beneficial effects:
(1) the method comprises four modules of quality control of image segmentation, region of interest extraction, segmentation difficulty level prediction and structure segmentation. The method can reduce the influence of human factors while ensuring the segmentation precision, realizes the repeatability of segmentation, greatly improves the segmentation efficiency, and has important significance for the diagnosis of cataract diseases.
(2) In the preprocessing process of extracting the region of interest, the image segmentation idea is adopted for boundary searching, so that the problems that the traditional method and the deep learning detection algorithm are poor in robustness and easy to exceed the detection boundary and the like are solved, and the stability and the precision are obviously improved. Meanwhile, the target area is extracted through the preprocessing algorithm, the size of the image to be segmented is greatly reduced, the interference of redundant information is reduced, and the calculated amount of the segmentation network algorithm is reduced.
(3) The ShuffleSeg segmentation network based on ShuffleNet and SkipNet can greatly reduce the calculated amount and realize real-time segmentation while ensuring certain precision, and has great significance for practical application.
(4) The segmentation difficulty level prediction and the structure segmentation share the features extracted by the ShuffleNet feature extraction network. On this basis, the extracted features are used effectively, the two tasks are carried out efficiently by the classification and segmentation network branches, and algorithm running time is greatly reduced.
(5) The segmentation framework of the invention, designed for AS-OCT images, has strong anti-interference capability and good generalization ability, and its concept can be conveniently applied to other image segmentation fields.
Drawings
FIG. 1 is a general flow chart of lens structure segmentation of AS-OCT images;
FIG. 2 is a diagram of a network structure of a partitionable determination module;
FIG. 3 is a flow chart for acquiring an ROI image;
FIG. 4 is a schematic diagram of a process for obtaining an ROI image;
FIG. 5 is a network architecture diagram of a segmentation difficulty ranking module;
FIG. 6 is a schematic flow chart of lens structure segmentation;
FIG. 7 is a network architecture diagram of a real-time partitioning module;
FIG. 8 is a schematic diagram of the ShuffleNet unit.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that the conventional terms should be interpreted as having a meaning that is consistent with their meaning in the relevant art and this disclosure. The present disclosure is to be considered as an example of the invention and is not intended to limit the invention to the particular embodiments.
Example one
A lens segmentation method, comprising: acquiring an ROI image containing the crystalline lens from the original image; judging the image segmentation difficulty based on the ROI image, and obtaining a lens segmentation result through a real-time segmentation network based on the ROI image. The target area is extracted by a preprocessing algorithm, which greatly reduces the size of the image to be segmented, the interference of redundant information, and the computational load of the segmentation network. Both the segmentation-difficulty judgment and the lens segmentation use a real-time segmentation network that encodes with ShuffleNet and decodes with SkipNet. In practice, because the patient's lens structure is diseased or an intraocular lens has been implanted, the captured AS-OCT image may show a missing or absent lens structure, which interferes with structure segmentation. For this reason, in this embodiment, before the ROI image containing the lens is acquired from the original image, whether the original image can be segmented is automatically determined.
Fig. 1 shows the overall flow of the method of this embodiment, which includes four steps:
step one, judging whether the AS-OCT original image can be segmented or not.
And step two, acquiring an ROI image containing the crystalline lens from the original image. As the AS-OCT original image is large in size (e.g., 2130 multiplied by 1864), the left and right side regions of the crystalline lens in the image are redundant information, which can interfere with structure segmentation and increase algorithm processing difficulty and calculation amount. And searching the boundary of the image crystalline lens region through a preprocessing algorithm, and extracting a key region of the image to be segmented while reducing noise. On the premise of not influencing the structure segmentation, the size and the range of the image are greatly reduced, and the method is favorable for subsequent network segmentation and reduction of the calculated amount. In addition, the AS-OCT lesion image is very complex, and the traditional image processing method is easily influenced by image quality in the process of finding the boundary, so that the problems of inaccurate or wrong boundary finding and the like occur, and the AS-OCT lesion image is difficult to cope with various practical application scenes.
And step three, taking the ROI image obtained in the step two as input, obtaining the segmentation difficulty level of the original image, and taking the segmentation difficulty level as the evaluation of the confidence coefficient of the image segmentation result.
And step four, taking the ROI image obtained in the step two as input, obtaining a lens structure segmentation result through a real-time segmentation network, and obtaining a visual result of image segmentation through post-processing.
In step one, the original image is taken as input and encoded by a ShuffleNet network; an Average Pooling Layer (AvgPool) is added to the encoder's output, followed by a Fully Connected layer (FC), yielding a segmentable/not-segmentable result. The network structure is shown in fig. 2. In this embodiment, if the image is judged not segmentable, the image segmentation difficulty may be directly set to level 0, indicating maximum difficulty.
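The quality-control head described above (ShuffleNet encoding, average pooling, then a fully connected layer producing a two-way decision) can be sketched in numpy. The patent gives no code, so `segmentable_head`, its weight shapes, and the toy feature map below are illustrative assumptions, not the trained network:

```python
import numpy as np

def segmentable_head(features, w, b):
    """Classification head sketch: global average pooling over the
    encoded feature map, then one fully connected layer producing
    two logits (segmentable / not segmentable).
    features: (C, H, W) encoder output; w: (2, C); b: (2,)."""
    pooled = features.mean(axis=(1, 2))   # AvgPool over spatial dims -> (C,)
    logits = w @ pooled + b               # FC layer -> (2,)
    return logits

# Toy check with a random 4-channel "feature map".
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8, 8))
w = rng.normal(size=(2, 4))
b = np.zeros(2)
logits = segmentable_head(feats, w, b)
label = "segmentable" if logits.argmax() == 0 else "not segmentable"
```

In the real system the encoder weights would be learned end to end; the head itself is just pooling plus one linear map, as sketched.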
The main purpose of step two is to extract an ROI image (the ROI image in FIG. 3) from the original image, reducing the size of the image input to the segmentation network and the interference of redundant information, and thereby improving segmentation speed. Step two takes the original image as input (the Input image in fig. 3) and applies median filtering with a 5 × 5 kernel to reduce image noise introduced by the acquisition equipment. The filtered image is then input into the ShuffleSeg network at a size of 240 × 120. The segmentation result produced by the ShuffleSeg network is shown as the preprocessed segmentation image in fig. 4, in which the lens region is the foreground and all other regions are the background. The vertical coordinates of the upper and lower boundaries of the lens capsule, denoted ytop and ybottom, are obtained by searching the segmentation result image from the top downward and from the bottom upward. The center position of the lens is computed from these boundaries, and the non-background region is searched from both sides toward the center to obtain the abscissas of the left and right boundaries, denoted xleft and xright. To ensure the ROI image contains the complete lens capsule region, the upper and lower boundaries are moved outward by a certain distance before the region of interest is cropped, as shown in the ROI image in fig. 4.
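The boundary search just described can be sketched directly on a binary mask in numpy. The function name and the margin value are illustrative assumptions; the scan directions follow the text (top-down/bottom-up for ytop/ybottom, both sides toward the center row for xleft/xright):

```python
import numpy as np

def find_roi(mask, margin=10):
    """Boundary search on a binary segmentation mask (1 = lens
    foreground, 0 = background). Returns (ytop, ybottom, xleft, xright),
    with a vertical margin added so the capsule is fully contained."""
    rows = np.flatnonzero(mask.any(axis=1))
    ytop, ybottom = rows[0], rows[-1]          # top-down and bottom-up scans
    ycenter = (ytop + ybottom) // 2            # lens center row
    cols = np.flatnonzero(mask[ycenter])       # non-background on the center row
    xleft, xright = cols[0], cols[-1]
    ytop = max(ytop - margin, 0)               # expand up/down to keep the capsule
    ybottom = min(ybottom + margin, mask.shape[0] - 1)
    return ytop, ybottom, xleft, xright

# Toy mask: a 40 x 100 foreground rectangle inside a 100 x 200 image.
mask = np.zeros((100, 200), dtype=np.uint8)
mask[30:70, 50:150] = 1
ytop, ybottom, xleft, xright = find_roi(mask)
roi = mask[ytop:ybottom + 1, xleft:xright + 1]  # crop the ROI out of the image
```

On a real AS-OCT result image the mask would come from the ShuffleSeg network rather than being synthetic.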
In step three, the ROI image obtained in step two is taken as input and encoded with ShuffleNet for feature extraction; the extracted feature map is then decoded with SkipNet. The encoder and decoder outputs are average-pooled separately and fused, two fully connected layers follow, and the grading result of the image segmentation difficulty is produced (in this embodiment grades 1-5, a larger grade meaning easier segmentation). The network structure is shown in fig. 5.
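The fusion head of step three (pool encoder and decoder features separately, concatenate, then two fully connected layers ending in five grade logits) can be sketched as follows; all shapes and weights are illustrative assumptions, not the patent's trained parameters:

```python
import numpy as np

def difficulty_head(enc_feats, dec_feats, w1, b1, w2, b2):
    """Grading head sketch: average-pool the encoder and decoder
    feature maps, concatenate (fuse) the pooled vectors, then apply
    two fully connected layers producing 5 logits (grades 1-5)."""
    fused = np.concatenate([enc_feats.mean(axis=(1, 2)),
                            dec_feats.mean(axis=(1, 2))])
    hidden = np.maximum(w1 @ fused + b1, 0)    # first FC + ReLU
    return w2 @ hidden + b2                    # second FC -> 5 logits

rng = np.random.default_rng(1)
enc = rng.normal(size=(6, 8, 8))               # toy encoder features
dec = rng.normal(size=(4, 16, 16))             # toy decoder features
w1 = rng.normal(size=(16, 10)); b1 = np.zeros(16)
w2 = rng.normal(size=(5, 16)); b2 = np.zeros(5)
logits = difficulty_head(enc, dec, w1, b1, w2, b2)
grade = int(logits.argmax()) + 1               # grade in 1..5
```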
In step four, the ROI image obtained in step two is taken as input (shown in fig. 4) and uniformly scaled to 256 × 512. In this embodiment a four-class ShuffleSeg real-time segmentation network performs the lens structure segmentation; the process is shown in fig. 6. The real-time semantic segmentation network ShuffleSeg comprises an encoding process and a decoding process; the network structure is shown as the ShuffleSeg Network Architecture in FIG. 7. The encoding process is based on the ShuffleNet network and is mainly responsible for extracting image features. That network reduces the computational load through grouped convolution and maintains excellent accuracy with channel shuffle, improving network performance. It begins with a 3 × 3 convolution with stride 2 for downsampling (Conv1, 24 output channels), an activation function (ReLU) after the convolutional layer, followed by 2 × 2 Max Pooling. The network then has three stages (Stage 2, Stage 3, Stage 4), each composed of several ShuffleNet Units (SU). Stages 2 and 4 consist of 3 ShuffleNet units each (SUs = 3), and stage 3 consists of 7 (SUs = 7); the numbers of output channels in stages 2, 3, and 4 are 240, 480, and 960, respectively. The decoding process is based on the SkipNet network and is mainly responsible for upsampling and computing the final class probability map; SkipNet improves accuracy by exploiting high-resolution feature maps. The output of Stage 4 passes through a 1 × 1 convolution (1 × 1 Conv) to form a Score Layer, i.e., a probability map, converting the channel count to the number of classes.
The output of the score layer is upsampled by a factor of 2 (x2 Upsampling) with a 4 × 4 convolution kernel, giving an upsampled image. The outputs of Stage 2 and Stage 3 serve as skip inputs (Feed1 Layer, Feed2 Layer), each passed through its own 1 × 1 convolution (1 × 1 Conv) to form heat maps that improve resolution. The upsampled score-layer output is element-wise added to the Stage 3 heat map to obtain an intermediate layer (denoted Use Feed1). Use Feed1 is upsampled with stride 2 and a 4 × 4 convolution kernel to obtain a second score layer (denoted Score Layer2). Element-wise addition of the Stage 2 heat map and Score Layer2 yields another intermediate layer (denoted Use Feed2). Finally, a transposed convolution initialized with bilinear upsampling, with a 16 × 16 kernel and stride 8 (x8 Upsampling), produces the final probability map matching the input size.
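One stage of the decode-and-fuse flow above (score layer via 1 × 1 convolution, 2× upsampling, element-wise addition with a skip heat map) can be sketched in numpy. Nearest-neighbour upsampling stands in for the learned transposed convolution, and all shapes and weights are illustrative assumptions:

```python
import numpy as np

def score_layer(feats, w):
    """A 1x1 convolution is a per-pixel linear map over channels:
    (C, H, W) features -> (K, H, W) class scores."""
    return np.einsum('kc,chw->khw', w, feats)

def upsample2x(x):
    """Nearest-neighbour 2x spatial upsampling, standing in for the
    learned stride-2 transposed convolution."""
    return np.kron(x, np.ones((1, 2, 2)))

rng = np.random.default_rng(2)
stage4 = rng.normal(size=(8, 4, 8))    # deepest encoder features
feed1 = rng.normal(size=(4, 8, 16))    # higher-resolution skip features
w_score = rng.normal(size=(4, 8))      # 4 classes from 8 channels
w_feed1 = rng.normal(size=(4, 4))      # 1x1 conv on the skip branch

scores = score_layer(stage4, w_score)                      # (4, 4, 8)
fused = upsample2x(scores) + score_layer(feed1, w_feed1)   # element-wise add
```

The second fusion with the Stage 2 heat map and the final 8× upsampling repeat the same pattern at larger spatial sizes.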
The design of the ShuffleNet Unit draws on the residual network (ResNet) and comprises two basic branches, shown as SU: ShuffleNet Unit in FIG. 7 and in FIG. 8. The first branch performs average pooling (AVG Pool) with stride 2 and a 3 × 3 kernel. The second branch performs a 1 × 1 grouped point-wise convolution (GC) with batch normalization (BN), a ReLU activation, and channel rearrangement (1 × 1 GC + Shuffle); then a Depthwise Convolution (DWC) with stride 2 and a 3 × 3 kernel, followed by batch normalization; the depthwise output then passes through another 1 × 1 grouped point-wise convolution (1 × 1 GC), again followed by batch normalization. Finally, the results of the two branches are concatenated by channel (Concat) and passed through a ReLU activation to obtain the unit's output.
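The channel rearrangement ("Shuffle") in the second branch is the key trick that lets grouped convolutions exchange information across groups; it is just a reshape and transpose. A minimal numpy sketch (function name is illustrative):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle as used by the ShuffleNet unit: split the C
    channels into `groups` groups and interleave them so subsequent
    grouped convolutions see channels from every group."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# Label each channel with its index to make the permutation visible.
x = np.arange(6)[:, None, None] * np.ones((6, 1, 1))
y = channel_shuffle(x, groups=2)
order = [int(y[i, 0, 0]) for i in range(6)]
# With 2 groups of 3, channel order 0,1,2,3,4,5 becomes 0,3,1,4,2,5.
```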
Steps three and four share the features extracted by the ShuffleNet feature extraction network and can be executed sequentially or in parallel. On this basis, the extracted features are used effectively, the two tasks are carried out efficiently by the classification and segmentation network branches, and algorithm running time is greatly reduced.
Preferably, the network model for grading image segmentation difficulty in step three of this embodiment is built on the lens structure segmentation network, as follows. Step 3.1: collect sample data and mark a reference dividing line for each sample. The sample data are the ROI images obtained in step two. The labeling in step 3.1 marks the real segmentation lines of the lens structure in the ROI image, which serve as the criterion for evaluating whether the segmentation result of step 3.2 is accurate.
Step 3.2: input the sample data into the network used in step four to segment the lens structure in the ROI image and obtain a segmentation result.
Step 3.3: compute the composite segmentation error of the automatic segmentation result for each sample. The error can be obtained by computing the average pixel distance between the segmentation result and the label for each sample. Taking the shortest distance between each boundary point on a segmentation line and the marked real segmentation line as the error of that point, the mean and variance of the point errors over the whole line give the single error of that segmentation line. The composite segmentation error is then computed from the single errors of the individual segmentation lines; for example, the composite error in this embodiment is:
$$E_i=\sum_{k=1}^{6} w_k\,\frac{e_{i,k}}{\bar e_k}$$
where k indexes the six boundary dividing lines: the upper and lower boundaries of the lens capsule, the upper and lower boundaries of the cortex, and the upper and lower boundaries of the lens nucleus; $e_{i,k}$ is the single error of boundary line k for sample data i; $\bar e_k$ is the mean of the single errors of line k over all sample data; and $w_k$ is the weight assigned to the single error of line k.
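Assuming the composite error takes the weighted, mean-normalized form described above (the embodiment's exact expression is given only as embedded figures), a minimal numpy sketch:

```python
import numpy as np

def composite_error(errors, weights):
    """Composite segmentation error sketch: for each sample, a
    weighted sum over the six boundary lines of the single error
    normalized by that line's mean over all samples.
    errors: (N, 6) single errors for N samples; weights: (6,)."""
    means = errors.mean(axis=0)                 # per-line mean over all samples
    return (weights * errors / means).sum(axis=1)

# Two toy samples, six boundary lines, equal weights summing to 1.
errors = np.array([[1.0, 2.0, 1.0, 2.0, 1.0, 2.0],
                   [3.0, 2.0, 3.0, 2.0, 3.0, 2.0]])
weights = np.full(6, 1 / 6)
E = composite_error(errors, weights)
```

Normalizing by the per-line mean keeps lines with intrinsically larger pixel distances from dominating the sum; the weights then encode each line's clinical importance.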
Step 3.4: determine the segmentation difficulty level of each sample from the composite error of its automatic segmentation result. The number of difficulty levels and the proportion of each level can be chosen according to actual needs and the distribution of composite errors over all samples. Taking five difficulty levels as an example, according to the statistics and distribution of the composite errors, the samples can be partitioned as level 1: 5%, level 2: 15%, level 3: 45%, level 4: 25%, and level 5: 10%, which fixes the range of composite error E corresponding to each level. Segmentation difficulty decreases from level 1 to level 5: level 5 means the segmentation is easiest, the error of the segmentation result is small, and the result is highly reliable; level 1 means the segmentation is hardest, the error is large, and the result is unreliable. Once the composite error range for each level is determined, the difficulty level of each sample follows from its composite error. Training on the samples and their difficulty levels then establishes a segmentation-difficulty judging network that automatically judges the segmentation difficulty of the lens structure in AS-OCT images.
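The level assignment amounts to cutting the empirical error distribution at percentiles matching the chosen proportions (5%/15%/45%/25%/10% for levels 1-5, with level 5 the smallest errors). A numpy sketch, with the function name and toy data as illustrative assumptions:

```python
import numpy as np

def difficulty_levels(E, fractions=(0.10, 0.25, 0.45, 0.15, 0.05)):
    """Assign difficulty levels 5 (easiest, smallest error) down to
    1 (hardest, largest error) so the given fractions of samples
    land in levels 5, 4, 3, 2, 1 respectively. Thresholds come
    from the empirical error distribution via quantiles."""
    cuts = np.quantile(E, np.cumsum(fractions)[:-1])
    # searchsorted bin 0 -> level 5 (easiest) ... bin 4 -> level 1.
    return 5 - np.searchsorted(cuts, E, side='right')

E = np.linspace(0.0, 1.0, 20)   # toy composite errors, already sorted
levels = difficulty_levels(E)   # smallest errors get level 5
```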
And 3.5, establishing a segmentation difficulty judgment network based on each sample data and the segmentation difficulty grade thereof. And adding a parallel classification branch as a segmentation difficulty judgment network on the basis of the segmentation network adopted in the fourth step, so as to classify by using the effective characteristics obtained by automatically segmenting the network.
Example two
A lens segmentation system comprising four components:
and the judging module whether to divide the input original image or not. And judging whether the input AS-OCT original image can be segmented or not. The module corresponds to the first step of the first embodiment, and can be used for realizing the first step of the first embodiment.
An ROI image extraction module, which searches for the boundaries of the lens region in the original image and extracts an ROI image containing the lens. This module corresponds to step two of embodiment one and can be used to implement it.
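As an illustration of this module, the boundary search and crop can be sketched with NumPy, assuming the segmentation result is available as a binary lens mask (both function names are hypothetical):

```python
import numpy as np

def roi_bounds(mask):
    """Left/right/top/bottom boundaries of the nonzero (lens) region
    of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), xs.max() + 1, ys.min(), ys.max() + 1

def crop_roi(image, bounds):
    """Cut the ROI out of the original image using the found boundaries."""
    left, right, top, bottom = bounds
    return image[top:bottom, left:right]
```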
A segmentation difficulty grading module, which takes the ROI image output by the ROI image extraction module as input and judges the segmentation difficulty level of the image. This module corresponds to step three of embodiment one and can be used to implement it.
A real-time segmentation module, which takes the ROI image output by the ROI image extraction module as input and obtains the lens structure segmentation result through a real-time segmentation network. This module corresponds to step four of embodiment one and can be used to implement it.
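Hypothetical glue code wiring the four modules together in the order this embodiment describes; every callable is a stand-in for the corresponding network, and the result type is invented for the sketch:

```python
from dataclasses import dataclass
from typing import Callable, Optional

HARDEST_LEVEL = 1  # level 1 = most difficult, least reliable result

@dataclass
class LensSegmentationResult:
    difficulty_level: int
    mask: Optional[object]   # per-pixel result, or None if not segmentable

def segment_lens(image,
                 is_segmentable: Callable,
                 extract_roi: Callable,
                 grade_difficulty: Callable,
                 segment: Callable) -> LensSegmentationResult:
    """1) Reject non-segmentable images with the hardest difficulty grade;
    2) crop the lens ROI; 3) grade its difficulty; 4) segment in real time."""
    if not is_segmentable(image):
        return LensSegmentationResult(HARDEST_LEVEL, None)
    roi = extract_roi(image)
    return LensSegmentationResult(grade_difficulty(roi), segment(roi))
```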
Embodiment three
A computer-readable storage medium comprising computer-readable instructions that, when read and executed by a processor, cause the processor to perform the operations of embodiment one.
Although embodiments of the present invention have been described, various changes or modifications may be made by one of ordinary skill in the art within the scope of the appended claims.

Claims (9)

1. A lens segmentation method, comprising: acquiring an ROI image containing the lens from an original image; judging the image segmentation difficulty based on the ROI image, and obtaining a lens segmentation result from the ROI image through a real-time segmentation network.

2. The lens segmentation method according to claim 1, wherein acquiring the ROI image containing the lens from the original image comprises: filtering the original image and inputting it into a ShuffleSeg network to obtain a segmentation result image; obtaining the left, right, upper and lower boundaries of the ROI from the segmentation result image; and cropping the ROI image out of the original image according to the obtained boundaries.

3. The lens segmentation method according to claim 1, wherein judging the image segmentation difficulty based on the ROI image comprises: inputting the ROI image into a ShuffleNet network for encoding and inputting the encoded feature map into a SkipNet network for decoding; applying average pooling to the encoding result and the decoding result respectively, fusing the pooled results, and connecting a fully connected layer to obtain the image segmentation difficulty.

4. The lens segmentation method according to claim 1, wherein obtaining the lens segmentation result from the ROI image through the real-time segmentation network comprises: inputting the ROI image into a ShuffleNet network for encoding to extract image features, and upsampling the extracted features by SkipNet decoding to compute the final class probability map.

5. The lens segmentation method according to claim 1, wherein: before acquiring the ROI image containing the lens from the original image, it is judged whether the original image can be segmented; if it cannot be segmented, the image is judged to have the highest segmentation difficulty.

6. The lens segmentation method according to claim 5, wherein judging whether the original image can be segmented comprises: inputting the original image into a ShuffleNet network for encoding, adding an average pooling layer after the encoder output layer, and connecting a fully connected layer to output the judgment result.

7. A lens segmentation system, comprising: an ROI image extraction module for acquiring an ROI image containing the lens from an original image; a segmentation difficulty grading module for judging the image segmentation difficulty based on the input ROI image; and a real-time segmentation module for performing real-time lens segmentation on the input ROI image; wherein the ROI image extraction module extracts the ROI image from the input original image and inputs it into the segmentation difficulty grading module and the real-time segmentation module.

8. The lens segmentation system according to claim 7, further comprising: a segmentability judging module for judging whether the input original image can be segmented; the segmentability judging module causes the ROI image extraction module to extract the image ROI region when the input original image is judged segmentable, and judges the image to have the highest segmentation difficulty when the input original image is judged non-segmentable.

9. A computer-readable storage medium comprising computer-readable instructions that, when read and executed by a processor, cause the processor to perform the operations of any one of claims 1 to 6.
CN201911350333.6A 2019-12-24 2019-12-24 Lens segmentation method, device and storage medium Active CN111210436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911350333.6A CN111210436B (en) 2019-12-24 2019-12-24 Lens segmentation method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111210436A true CN111210436A (en) 2020-05-29
CN111210436B CN111210436B (en) 2022-11-11

Family

ID=70789323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911350333.6A Active CN111210436B (en) 2019-12-24 2019-12-24 Lens segmentation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111210436B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418290A (en) * 2020-11-17 2021-02-26 中南大学 ROI (region of interest) region prediction method and display method of real-time OCT (optical coherence tomography) image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270447A1 (en) * 2013-03-13 2014-09-18 Emory University Systems, methods and computer readable storage media storing instructions for automatically segmenting images of a region of interest
US20180130213A1 (en) * 2015-05-18 2018-05-10 Koninklijke Philips N.V. Self-aware image segmentation methods and systems
CN110084828A (en) * 2019-04-29 2019-08-02 北京华捷艾米科技有限公司 A kind of image partition method, device and terminal device
CN110176008A (en) * 2019-05-17 2019-08-27 广州视源电子科技股份有限公司 Crystalline lens dividing method, device and storage medium
CN110276402A (en) * 2019-06-25 2019-09-24 北京工业大学 A Salt Body Recognition Method Based on Deep Learning Semantic Boundary Enhancement
CN110507364A (en) * 2019-07-30 2019-11-29 中国医学科学院生物医学工程研究所 A kind of ophthalmic lens ultrasonic imaging method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LU, LIN: "Research on Key Technologies of Virtual Colon Visualization: Fully Automatic Segmentation and Outer-Wall-Based Virtual Flattening", China Excellent Doctoral and Master's Dissertations Full-text Database (Doctoral), Information Science and Technology *
WANG, ENDE et al.: "Semantic Segmentation Method for Remote Sensing Images Based on Neural Networks", Acta Optica Sinica *


Also Published As

Publication number Publication date
CN111210436B (en) 2022-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant