CN113768461B - Fundus image analysis method, fundus image analysis system and electronic equipment - Google Patents
Fundus image analysis method, fundus image analysis system and electronic equipment
- Publication number
- CN113768461B (application CN202111073695A)
- Authority
- CN
- China
- Prior art keywords
- leopard
- fundus
- region
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Landscapes
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Evolutionary Computation (AREA)
- Animal Behavior & Ethology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Veterinary Medicine (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Signal Processing (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Evolutionary Biology (AREA)
- Eye Examination Apparatus (AREA)
Abstract
Embodiments of the invention provide a fundus image analysis method, a fundus image analysis system and an electronic device. The fundus image analysis method comprises the following steps: acquiring a fundus image to be analyzed; performing pixel-level segmentation on the fundus image to obtain leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region; segmenting, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, to obtain region segmentation information identifying the region of interest to which each pixel belongs; and quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information. Because the fundus leopard print is quantified from both the leopard print segmentation information and the region segmentation information, the invention gives a diagnostic result quickly and accurately, and a doctor no longer needs to spend substantial effort and time learning how to judge the degree of leopard print from a fundus image.
Description
Technical Field
The present invention relates to the field of medical data analysis, in particular to the analysis of fundus images using neural network models, and more particularly to a fundus image analysis method, system and electronic device.
Background
The fundus is the back portion of the interior of the eye, including the retina, the optic papilla, the macula and the central retinal artery. The condition of the fundus can generally be observed noninvasively, painlessly and quickly with a fundus camera, and related diseases can be diagnosed from it. Because fundus photography is convenient and harmless, and because many diseases of the human body manifest themselves in the fundus, disease screening based on fundus images has become an important means of diagnosing conditions such as diabetes and glaucoma and of managing chronic diseases.
Leopard-like fundus refers to the condition in which the retinal area of the fundus presents with a texture that resembles leopard skin.
In general, leopard prints are common in highly myopic and elderly people. For high myopia in particular, the higher the degree of myopia and the older the patient, the higher the probability of a leopard-pattern fundus appearing and the more obvious the appearance.
The reason is that in high myopia the axial length of the eye is greater than normal and the retina is correspondingly thinner. Once the retina has thinned, the underlying blood vessels become visible, giving the fundus a texture resembling leopard skin.
In general, a simple leopard-pattern fundus by itself does not affect vision or cause other ocular discomfort, is not an eye disease in the strict sense, requires no special treatment and does not usually lead to blindness. However, a leopard-pattern fundus can be regarded as a precursor sign, indicating the possibility of future fundus lesions. If it develops into serious fundus pathology such as retinal detachment, retinal holes or maculopathy, vision drops markedly and serious consequences, including blindness, may follow. Therefore, once a leopard-pattern fundus appears, the state of the fundus should be checked periodically and its progression monitored.
However, ophthalmology is a highly specialized branch of medicine, and specialist ophthalmologists are in short supply in China and cannot meet the needs of general screening. Automatic fundus image reading has therefore become an important means of diagnosing fundus diseases and tracking their course.
When an ophthalmologist examines fundus images directly, the difference between leopard print and the rest of the fundus is small when the leopard print is mild, so the two are hard to distinguish quickly by eye; when the leopard print is severe, its exact severity is hard to determine quickly, and a large amount of time and effort is consumed. Moreover, leopard prints appearing in different fundus regions, and to different degrees, affect vision differently, which makes it difficult for the ophthalmologist to give an accurate screening result. These problems arise mainly because the leopard-pattern fundus is difficult to quantify precisely, so the need for continuous monitoring of disease progression is hard to meet. There is therefore a need to improve on the prior art.
Disclosure of Invention
It is therefore an object of the present invention to overcome the above-described drawbacks of the prior art and to provide a fundus image analysis method, system and electronic apparatus.
The object of the invention is achieved by the following technical solutions:
according to a first aspect of the present invention, there is provided a fundus image analysis method comprising: acquiring a fundus image to be analyzed; performing pixel-level segmentation on the fundus image to obtain leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region; segmenting, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, to obtain region segmentation information identifying the region of interest to which each pixel belongs; and quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information.
In some embodiments of the invention, the plurality of regions of interest includes the optic disc region, the macular region, and other regions of the fundus.
In some embodiments of the present invention, quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information comprises: calculating the local leopard print density of each region of interest from the leopard print segmentation information and the region segmentation information.
In some embodiments of the present invention, quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information comprises: weighting and summing the corresponding local leopard print densities according to a region weight coefficient preset for each region of interest, to obtain a globally weighted leopard print density.
In some embodiments of the present invention, the relative magnitudes of the region weight coefficients preset for the regions of interest are set as follows: the preset region weight coefficient of the macular region is greater than or equal to that of the optic disc region, and the preset region weight coefficient of the optic disc region is greater than those of the other regions of the fundus.
In some embodiments of the present invention, quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information comprises: correcting, according to a preset correction algorithm and correction parameters, the leopard print density corresponding to fundus images acquired by fundus cameras of the corresponding brand and/or model, to obtain a corrected leopard print density.
In some embodiments of the present invention, quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information comprises: grading the leopard print of the fundus image according to the corresponding leopard print density or the fundus image itself.
In some embodiments of the present invention, a trained fundus leopard print segmentation module performs the pixel-level segmentation of the fundus image to obtain the leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region, and is trained in the following manner: acquiring a first training set comprising fundus images and leopard print segmentation labels marking each pixel as belonging to a leopard print or non-leopard print region; training the fundus leopard print segmentation module with the first training set to segment leopard print from non-leopard print regions, obtaining a probability value that each pixel belongs to a leopard print region, calculating a first loss from the probability values and the leopard print segmentation labels, and adjusting the weight parameters of the fundus leopard print segmentation module based on the first loss.
In some embodiments of the present invention, a trained fundus region segmentation module segments, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees and obtains region segmentation information identifying the region of interest to which each pixel belongs; the fundus region segmentation module comprises a plurality of fundus region segmentation sub-models, each corresponding to one region of interest to be segmented.
In some embodiments of the present invention, the fundus region segmentation sub-model for the optic disc region or the macular region is trained as follows: acquiring a second training set comprising fundus images and corresponding region-of-interest segmentation labels marking whether each pixel belongs to the region of interest; training the fundus region segmentation sub-model with the second training set to separate the region of interest from the rest of the image, obtaining a probability value that each pixel belongs to the region of interest, calculating a second loss from the probability values and the region-of-interest segmentation labels, and adjusting the weight parameters of the fundus region segmentation sub-model based on the second loss.
According to a second aspect of the present invention, there is provided a fundus leopard print analysis system comprising: a fundus leopard print segmentation module for performing pixel-level segmentation on a fundus image to obtain leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region; a fundus region segmentation module for segmenting, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, to obtain region segmentation information identifying the region of interest to which each pixel belongs; and a fundus leopard print analysis module for quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information and outputting the local leopard print density of each region of interest, the globally weighted leopard print density, the leopard print grading result, or a combination thereof. For specific embodiments of the system, reference may be made to the method embodiments.
According to a third aspect of the present invention, there is provided an electronic device comprising: one or more processors; and a memory, wherein the memory is configured to store one or more executable instructions; the one or more processors being configured to implement the steps of the method of the first aspect by executing the one or more executable instructions.
Compared with the prior art, the invention has the advantages that:
1. the invention provides a method for quantitatively analyzing leopard print density in fundus images, replacing the previous rough qualitative analysis with a quantifiable numerical index and making it possible to track the progression of the disease course more accurately;
2. leopard prints are graded according to the distribution observed in large-scale population data, turning a non-intuitive numerical value into a more intuitive severity grade; at the same time, the grading operation absorbs the differences in leopard print segmentation results caused by imaging differences between fundus cameras (such as colour cast or imaging style differences), so that results from different fundus cameras can be compared on a common scale;
3. the invention builds an end-to-end deep neural network model for leopard print recognition and leopard print density measurement, from which the leopard print density analysis result of an input fundus image can be obtained quickly and conveniently.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
fig. 1 is a block diagram of a fundus leopard print analysis system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of leopard print density correction according to an embodiment of the present invention;
fig. 3 is a schematic view of the leopard print condition on fundus images at each level when the leopard print scale is divided into 10 levels, according to an embodiment of the present invention;
fig. 4 is a schematic diagram of iteration of data annotation according to an embodiment of the present invention.
Detailed Description
For the purpose of making the technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by way of specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As mentioned in the background section, when an ophthalmologist examines fundus images directly, the difference between leopard print and the rest of the fundus is small when the leopard print is mild, so the two are hard to distinguish quickly by eye; when the leopard print is severe, its exact severity is hard to determine quickly, and a large amount of time and effort is consumed. Moreover, leopard prints appearing in different fundus regions, and to different degrees, affect vision differently, which makes it difficult for the ophthalmologist to give an accurate screening result. To solve this technical problem, the invention uses a fundus leopard print segmentation module to perform pixel-level segmentation on the fundus image, obtaining leopard print segmentation information that identifies whether each pixel belongs to a leopard print or non-leopard print region; uses a fundus region segmentation module to extract, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, obtaining region segmentation information identifying the region of interest to which each pixel belongs; and quantitatively analyzes the fundus leopard print from the leopard print segmentation information and the region segmentation information, so that a diagnostic result can be given quickly and accurately, and a doctor no longer needs to spend substantial effort and time learning how to judge the degree of leopard print from a fundus image.
Before describing embodiments of the present invention in detail, some of the terms used therein are explained as follows:
Optic disc: the optic nerve head, also known as the optic papilla or simply the disc. It is a pale red, disc-like structure on the retina, about 1.5 mm in diameter, located to the nasal side of the macula.
Macula (the macular region): the optical centre of the human eye, located at the centre of the retina about 0.35 cm temporal to and slightly below the optic disc. It is the projection point of the visual axis and the most sensitive region of vision.
This embodiment provides a fundus image analysis method that can be executed by an electronic device such as a computer or a server. The method quantitatively analyzes the leopard print in a fundus image by means of a fundus leopard print analysis system built around neural networks, thereby automatically detecting and quantifying leopard print and providing a basis for dynamically monitoring its development. As shown in fig. 1, the fundus leopard print analysis system comprises: a fundus leopard print segmentation module 1, a fundus region segmentation module 2 and a fundus leopard print analysis module 3.
According to one embodiment of the present invention, the fundus leopard print segmentation module 1 is a neural network segmentation model (also referred to as a segmentation neural network or segmentation model), and various network structures may be chosen, such as U-shaped networks (U-Net, U-Net++, etc.), FCN or SegNet. The original U-Net may also be modified; for example, in one embodiment, a segmentation model combining a VGG network with a U-Net is selected. The original U-Net comprises an encoding module and a decoding module, and the encoding module (the feature extraction part) can be replaced by a VGG network, with the U-Net decoding module adapted to match the VGG network (for example, by adjusting the input and output feature map sizes of the decoding module), forming an improved U-Net. VGG networks have good feature extraction capability and are widely used in feature extraction tasks, while U-Net is a thoroughly validated classical network widely applied in medical image segmentation; combining the two as the fundus leopard print segmentation model in this embodiment therefore improves the precision of leopard print segmentation. The invention thus builds an end-to-end deep neural network model for leopard print recognition, from which the leopard print recognition result of an input fundus image can be obtained quickly and conveniently.
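To make the encoder replacement concrete, the following is a minimal sketch (not the patentee's implementation; the decoder layout, channel widths, skip-connection indices and two-class output are assumptions) of a U-Net whose encoding module is the torchvision VGG16 feature extractor:

```python
# Minimal sketch (not the patentee's code) of a U-Net whose encoder is the torchvision
# VGG16 feature extractor, as described above. Skip indices and channel widths follow
# torchvision's VGG16 layout; the decoder design and two-class head are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class VGGUNet(nn.Module):
    SKIP_IDS = (3, 8, 15, 22)   # ends of the first four VGG16 conv blocks: 64/128/256/512 ch
    BOTTLENECK_ID = 29          # end of the fifth conv block: 512 ch at 1/16 resolution

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = vgg16().features            # VGG network as the encoding module
        skip_ch, dec_ch = (64, 128, 256, 512), (256, 128, 64, 64)
        ups, convs, in_ch = [], [], 512
        for s, d in zip(reversed(skip_ch), dec_ch):
            ups.append(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
            convs.append(nn.Sequential(
                nn.Conv2d(in_ch + s, d, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(d, d, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch = d
        self.ups, self.convs = nn.ModuleList(ups), nn.ModuleList(convs)
        self.head = nn.Conv2d(in_ch, num_classes, kernel_size=1)   # per-pixel class logits

    def forward(self, x):
        skips = []
        for i, layer in enumerate(self.encoder):   # run the VGG feature extractor
            x = layer(x)
            if i in self.SKIP_IDS:
                skips.append(x)                    # keep feature maps for skip connections
            if i == self.BOTTLENECK_ID:
                break
        for up, conv, skip in zip(self.ups, self.convs, reversed(skips)):
            x = conv(torch.cat([up(x), skip], dim=1))   # U-Net style decoding
        return self.head(x)                             # (N, num_classes, H, W)


model = VGGUNet(num_classes=2)                          # leopard print / non-leopard print
logits = model(torch.randn(1, 3, 512, 512))             # a fundus image tensor
leopard_prob = torch.softmax(logits, dim=1)[:, 1]       # per-pixel leopard print probability
```

The softmax over the two output channels gives, for each pixel, the probability of belonging to the leopard print region, which is the per-pixel output described in the training procedure below.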
According to one embodiment of the invention, a first training set is used when training the fundus leopard print segmentation module 1. Each sample in the first training set is a fundus image; the label of the sample is a leopard print segmentation label in which 0 or 1 indicates whether each pixel belongs to a non-leopard print or leopard print region, for example 0 for the non-leopard print region and 1 for the leopard print region. During training, the fundus leopard print segmentation module 1 segments the input fundus image and outputs the probability that each pixel belongs to the leopard print region; the first loss is calculated from these probabilities and the leopard print segmentation label, the model parameters of the fundus leopard print segmentation module 1 are adjusted according to the first loss, and training aims to minimise the first loss. Preferably, the loss function is the cross-entropy loss, calculated as follows:

Loss = −(1/N) Σᵢ [ yᵢ·log p(yᵢ) + (1 − yᵢ)·log(1 − p(yᵢ)) ]

where N is the total number of pixels, y denotes the leopard print segmentation label (yᵢ = 1 means the i-th pixel belongs to the leopard print region and yᵢ = 0 means it belongs to a non-leopard print region), and p(yᵢ) is the predicted probability that the i-th pixel belongs to the leopard print region.
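A corresponding training step could look as follows — an illustrative sketch that assumes the VGGUNet model sketched above; nn.CrossEntropyLoss applies the pixel-wise cross-entropy of the formula to the two-class logits, while the optimiser choice and learning rate are assumptions:

```python
# Illustrative training step for the fundus leopard print segmentation module, assuming
# the VGGUNet sketch above and a first training set of (image, label) pairs in which the
# label marks each pixel as 0 (non-leopard print) or 1 (leopard print).
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                          # pixel-wise cross entropy ("first loss")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimiser and learning rate are assumed

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, H, W) float tensor; labels: (N, H, W) long tensor of 0/1 per pixel."""
    optimizer.zero_grad()
    logits = model(images)                 # (N, 2, H, W) per-pixel class logits
    loss = criterion(logits, labels)       # averages -log p(y_i) over all pixels
    loss.backward()
    optimizer.step()                       # adjust the module's weight parameters
    return loss.item()
```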
The applicant carried out a corresponding experiment on the training of the fundus leopard print segmentation module 1 (using the segmentation model that combines a VGG network with U-Net). A total of 12,000 annotated images from four camera types (3,000 per type) were used. Of these, 10,000 were randomly selected as the training set and 2,000 as the test set, stratified by camera type so that 500 test images came from each type. Using the mean intersection over union (Mean IoU) as the segmentation evaluation metric, the model achieved the following results on each camera type in the test set:
| Camera brand | Topcon | Reticam | Canon | Syseye |
|---|---|---|---|---|
| Mean IoU | 61.22% | 63.41% | 62.05% | 64.39% |
The experimental data show that the segmentation results of the fundus leopard print segmentation module 1 differ only slightly between camera brands and remain usable across all of them, demonstrating the effectiveness of the segmentation network.
According to one embodiment of the present invention, the fundus region segmentation module 2 is configured to divide the fundus into a plurality of different regions as needed. Leopard prints appearing in different fundus regions affect vision differently, and leopard prints in some regions may have a more severe effect than in others; for example, the risk of vision damage is higher in the macular and optic disc regions and relatively lower elsewhere. Therefore, to characterise changes in the leopard-pattern fundus, the leopard print density must be calculated separately for the different regions of interest. For this reason, the invention uses the fundus region segmentation module 2 to segment, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees.
Different segmentation approaches can be adopted for the characteristics of different regions of interest. In one embodiment, the fundus region segmentation module 2 comprises a plurality of fundus region segmentation sub-models, each of which can be applied as needed to a different region of interest. For example, the whole fundus region has a simple structure and contrasts clearly with the background, so it can be segmented with classical image processing methods: the corresponding fundus region segmentation sub-model may use threshold segmentation and contour extraction, although a neural network segmentation model may also be used. For the optic disc region and the macular region, whose boundaries are relatively blurred, a neural network segmentation model is preferred. In this embodiment, when a fundus region segmentation sub-model uses a neural network segmentation model, its network structure, training and inference are similar to those of the neural network used for fundus leopard print segmentation; reference may be made to the embodiment of the fundus leopard print segmentation module 1, which is not repeated here. It should be appreciated that the network structures of the different models need not be identical and can be adjusted to the requirements of the application, for example by increasing or decreasing the number of network layers or the number of filters. The other regions of the fundus can be obtained by removing the optic disc region and the macular region from the whole fundus region. It should be noted that the other fundus regions, the optic disc region and the macular region mentioned in the present invention are only possible examples of fundus image regions; in practice any combination of them may be used. For example, in one embodiment the regions of interest are the macular region and the other fundus regions; or the optic disc region and the other fundus regions; or the macular region and the optic disc region (the fundus structural regions). More fundus regions, or custom fundus regions, are also possible, for example adding to the previous examples a neighbourhood of the macular region (e.g. the area within a first predetermined radius of the macular centre and outside the macular region) and/or a neighbourhood of the optic disc region (e.g. the area within a second predetermined radius of the optic disc centre and outside the optic disc region). The technical solution of this embodiment can achieve at least the following beneficial technical effect: by providing a plurality of fundus region segmentation sub-models and using the sub-model that matches the characteristics of each region of interest, the required regions of interest are extracted more accurately and efficiently.
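For the whole-fundus region mentioned above, a classical image-processing sub-model might look like the following sketch (the threshold value and the use of OpenCV are illustrative assumptions, not the patentee's exact procedure):

```python
# Sketch of the classical image-processing option for the whole-fundus region: threshold
# the dark background away and keep the largest contour. The threshold value is an
# illustrative assumption; OpenCV is used here for convenience.
import cv2
import numpy as np

def whole_fundus_mask(bgr_image: np.ndarray, thresh: int = 10) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)   # fundus vs. black background
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)                    # the roughly circular fundus
    out = np.zeros_like(gray)
    cv2.drawContours(out, [largest], -1, 255, thickness=cv2.FILLED)
    return out   # 255 inside the fundus, 0 in the background

# "Other regions of the fundus" can then be obtained by mask subtraction:
# other_region = whole_fundus & ~optic_disc_mask & ~macula_mask
```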
According to an embodiment of the present invention, the fundus leopard print analysis module 3 quantitatively analyzes the fundus leopard print based on the outputs of the fundus leopard print segmentation module 1 and the fundus region segmentation module 2. For example, it may calculate the local leopard print density of each region of interest from the leopard print segmentation information and the region segmentation information. The invention uses leopard print density as the quantitative index of the degree of change of the fundus leopard print. Preferably, the leopard print density is defined as the proportion of the area of a fundus region of interest occupied by leopard print, which can be expressed by the following formula:

Density(ROI) = |B_leopard ∩ B_ROI| / |B_ROI|

where B_leopard denotes the leopard print segmentation information, B_ROI denotes the region segmentation information, and |B_ROI| denotes the number of pixels in the corresponding region of interest. That is, the leopard print density in a region of interest is the ratio of the number of pixels marked as leopard print within that region of interest to the total number of pixels in the region of interest. B_leopard records the binary leopard print segmentation result for each pixel and B_ROI records the binary region membership result for each pixel, i.e. both take values in {0, 1}. Using the above formula, the local leopard print density in a plurality of different fundus structures can be obtained, for example as a local leopard print density vector D. The technical solution of this embodiment can achieve at least the following beneficial technical effects: the local leopard print density of each region of interest is calculated from the leopard print segmentation information and the region segmentation information, helping the doctor quickly grasp the degree of leopard print (the leopard print density level) in the corresponding region of the patient's fundus and give a diagnosis quickly and accurately; the doctor no longer needs to spend substantial effort and time learning how to judge the leopard print degree from the fundus image for each region in which leopard print affects vision differently, which reduces the doctor's workload and improves efficiency and accuracy; and the patient obtains a quantitative picture of his or her own condition, which encourages attention to the eye condition and active eye protection.
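The density formula translates directly into code; the following sketch (function and variable names are hypothetical) computes the local leopard print density from the two binary masks:

```python
# Direct implementation of the density ratio defined above; B_leopard and B_ROI are binary
# masks of the same shape (1 = leopard print pixel / pixel inside the region of interest).
import numpy as np

def local_leopard_density(b_leopard: np.ndarray, b_roi: np.ndarray) -> float:
    """|B_leopard ∩ B_ROI| / |B_ROI|: fraction of ROI pixels marked as leopard print."""
    roi_pixels = np.count_nonzero(b_roi)
    if roi_pixels == 0:
        return 0.0                                           # empty region of interest
    return np.count_nonzero(np.logical_and(b_leopard, b_roi)) / roi_pixels

# Local leopard print density vector D over the regions of interest, e.g.:
# D = [local_leopard_density(b_leopard, roi) for roi in (macula, optic_disc, other_region)]
```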
According to an embodiment of the present invention, to make it easier to evaluate the leopard print density of the whole image, different region weight coefficients can be assigned to the different components of the local leopard print density vector D, and the fundus leopard print analysis module 3 combines them according to these preset region weight coefficients into a scalar representing the leopard print density level of the whole fundus, namely the globally weighted leopard print density D′:

D′ = wᵀD;

where w is the weight coefficient vector composed of the weight coefficients of the plurality of regions of interest, T denotes the transpose, and D is the local leopard print density vector. Preferably, the method further comprises: allowing all region weight coefficients to be set and adjusted in a user-defined manner. The technical solution of this embodiment can achieve at least the following beneficial technical effect: the globally weighted leopard print density combines the weight coefficients of regions in which leopard print affects vision to different degrees into a single comprehensive leopard print density, reducing the influence of regions with relatively low impact on vision on the overall value, so that medical staff can grasp the patient's overall leopard print condition intuitively from the globally weighted leopard print density and reach a diagnosis more accurately and efficiently. In other words, the method of the invention replaces the previous rough qualitative analysis with a quantifiable numerical index and makes it possible to track the progression of the disease course more accurately.
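A minimal sketch of the globally weighted density follows; the weight values are illustrative placeholders only and in practice would follow the ordering w_macula ≥ w_disc > w_other described earlier, and D would come from the local density computation above:

```python
# Globally weighted leopard print density D' = wT·D. The weight and density values below
# are example placeholders, not measured data.
import numpy as np

w = np.array([0.5, 0.3, 0.2])        # macular region, optic disc region, other fundus regions
D = np.array([0.42, 0.18, 0.27])     # local leopard print density vector (example values)
D_global = float(w @ D)              # scalar leopard print density level of the whole fundus
```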
The above embodiment can be applied in equipment with a fixed camera model, for example a fundus analysis device built around one fixed camera model (such as a fundus leopard print analysis device with an integrated camera). Such a device only processes images collected by a single piece of equipment, so there is no imaging-difference problem and the leopard print level can be determined directly from the computed leopard print density. In other embodiments, the invention may be applied in a web product that provides a fundus leopard print analysis service, for example an analysis website that receives and analyzes fundus images uploaded by clients (such as hospitals, optometry units, eye clinics or individuals). In this scenario, however, fundus images uploaded by different clients may have been acquired by fundus cameras of different brands and models, and the imaging differences between them (such as colour cast and imaging style differences) introduce distribution differences in the evaluated leopard print density, so the values cannot be compared directly and the accuracy of the leopard print density is affected; correction is therefore required. The leopard print density correction may be applied to the local leopard print densities and/or to the globally weighted leopard print density.
According to one embodiment of the invention, correction is performed per fundus camera brand, with products of the same brand using uniform correction parameters regardless of model, which simplifies the correction process. Preferably, the method comprises: acquiring a plurality of fundus images captured by fundus cameras of various models of the corresponding brand, analyzing the leopard print density (local leopard print density and/or globally weighted leopard print density) of each of these images to obtain a set of correction leopard print densities, and taking the mean and variance of these correction densities as the correction parameters for cameras of that brand. Preferably, the correction algorithm is: subtract the mean from the leopard print density and divide by the variance. Fig. 2 shows the distribution of leopard print density before (fig. 2a) and after (fig. 2b) correction for the four camera brands Topcon, Reticam, Canon and Syseye; the horizontal axis is the leopard print density value and the vertical axis is the relative sample count. Fig. 2a shows that before correction the distributions of the different cameras differ slightly and their peaks are misaligned; fig. 2b shows that after correction the peaks are aligned, so the leopard print level of fundus images acquired by different cameras can be analyzed more accurately. It should be understood that the correction parameters for the local leopard print density and for the globally weighted leopard print density are different: the mean and variance used to correct the local leopard print density are calculated from the correction local leopard print densities, and those used to correct the globally weighted leopard print density are calculated from the correction globally weighted leopard print densities.
According to another embodiment of the present invention, correction is performed per fundus camera brand and model, so that the leopard print density calculated from fundus images acquired by a camera of a specific model can be corrected more accurately. Preferably, the method comprises: acquiring a plurality of fundus images captured by fundus cameras of the corresponding brand and model, analyzing the leopard print density (local and/or globally weighted) of each of these images to obtain a set of correction leopard print densities, and taking their mean and variance as the correction parameters for that brand and model. The correction algorithm again subtracts the mean from the leopard print density and divides by the variance. In addition, fundus images acquired by cameras of all models of a brand can be analyzed together to obtain correction leopard print densities whose mean and variance serve as the correction parameters for unknown models of that brand; when the camera brand is known but the model is unknown or not covered, these brand-level parameters can be substituted. By statistically analyzing large samples of photographs from different cameras, the invention obtains the distribution of leopard print density for each camera; with a sufficiently large sample size, and assuming the distribution of leopard print in the population is consistent, the distributions of the different cameras are modelled statistically and the leopard print density results are mathematically corrected onto the same distribution, so that fundus images taken by different cameras can be compared directly. This makes it convenient to compare fundus screening results from different cameras and reduces the influence of imaging differences between cameras on leopard print analysis.
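The correction step reduces to a per-camera normalisation; in the following sketch the brand keys and parameter values are hypothetical placeholders rather than measured statistics for any real camera:

```python
# Per-camera correction: subtract the mean and divide by the variance estimated from a
# large sample of correction densities. Keys and parameter values are hypothetical.
CORRECTION_PARAMS = {
    "brand_A":         (0.21, 0.012),   # brand-level fallback (model unknown)
    "brand_A_model_X": (0.19, 0.010),   # brand + model specific parameters
}

def correct_density(density: float, camera: str, fallback: str = "brand_A") -> float:
    mean, var = CORRECTION_PARAMS.get(camera, CORRECTION_PARAMS[fallback])
    return (density - mean) / var        # the preset correction algorithm described above
```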
The leopard print density allows medical staff to understand the leopard print level of a patient's fundus intuitively, but it is still not intuitive enough for non-professionals (such as patients or their families) or even for some medical staff. Therefore, on the basis of the quantitative leopard print density analysis, according to one embodiment of the invention, several grades representing different degrees of leopard print severity can be determined from all collected sample data, and the different leopard print densities are modelled into grades to obtain a leopard print grading result. Different modelling approaches can be adopted for different task requirements.
According to one embodiment of the present invention, for an integrated fundus analysis device with a fixed camera model, leopard print grading can be performed directly on the globally weighted leopard print density. For example, a number of leopard print density threshold intervals are preset, each corresponding to one grade; if the globally weighted leopard print density falls within an interval, the leopard print grade corresponding to that interval is output. Similarly, a web product providing a fundus leopard print analysis service can grade on the corrected globally weighted leopard print density, with reference to the corresponding embodiment of the fundus analysis device.
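A sketch of the interval-based grading follows; the bin edges are illustrative assumptions (in practice they would be derived from the distribution of large-scale population data, as discussed below):

```python
# Interval-based grading of the (corrected) globally weighted leopard print density into
# the 10 severity levels used in fig. 3. The evenly spaced bin edges are an assumption.
import numpy as np

GRADE_EDGES = np.linspace(0.0, 1.0, 11)[1:-1]        # 9 thresholds -> 10 grade intervals

def leopard_grade(corrected_density: float) -> int:
    return int(np.searchsorted(GRADE_EDGES, corrected_density)) + 1   # 1 (mild) .. 10 (severe)
```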
According to one embodiment of the present invention, an integrated fundus analysis device with a fixed camera model can also grade on the local leopard print density. For example, a leopard print grading model is trained in advance, taking the local leopard print density corresponding to a fundus image as input and the leopard print grade as label, so that it learns to grade the leopard print density of the fundus image. After training, the local leopard print density of the patient to be examined is input into the leopard print grading model to obtain the corresponding leopard print grade. Grading on the corrected local leopard print density, the global leopard print density or the corrected global leopard print density is likewise possible. For example, a web product providing a fundus leopard print analysis service can grade on the corrected local leopard print density, with reference to the corresponding embodiment of the fundus analysis device.
According to another embodiment of the present invention, grading may also be performed directly on the fundus image. For example, a leopard print grading model is trained in advance, taking the fundus image as input and the leopard print grade as label, so that it learns to grade the leopard print severity of the fundus image. After training, the fundus image to be examined is input into the leopard print grading model to obtain the corresponding leopard print grade. In this way, leopard prints can be graded according to the distribution of large-scale population data samples, turning a non-intuitive numerical value into a more intuitive severity grade. At the same time, the grading operation absorbs the differences in leopard print segmentation results caused by imaging style differences between fundus cameras, so that results from different fundus cameras can be compared on a common scale.
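As a sketch of the image-based grading option (the ResNet-18 backbone, the 224×224 input size and the 10-level output are assumptions; the patent does not specify the classifier architecture):

```python
# Sketch of the image-based leopard print grading model: a standard CNN classifier with
# 10 output classes, one per severity level. Backbone and input size are assumptions.
import torch
from torchvision.models import resnet18

grading_model = resnet18(num_classes=10)
logits = grading_model(torch.randn(1, 3, 224, 224))   # a fundus image resized to 224x224
grade = int(logits.argmax(dim=1).item()) + 1          # predicted leopard print level 1..10
```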
In one embodiment, the leopard print severity (leopard print level) is divided into 10 grades; the appearance of each grade on the fundus image is illustrated by the schematic diagrams for levels 1-10 in fig. 3, where 10 denotes the highest severity and 1 the lowest. In the schematic for each grade, the left side is the original image and the right side is the same image with the leopard print portion highlighted, so that the fundus leopard print level can be observed more clearly.
If all sample data for training the leopard print grading model were labelled manually, the time cost would be high. According to one embodiment of the present invention, a label-assistance method is therefore provided, comprising: training the leopard print grading model on a data set containing a preset number of manually labelled samples; labelling unlabelled samples with the trained model to obtain machine-generated labels; manually correcting the machine-generated labels to obtain corrected samples; adding the corrected samples to the data set; and training the leopard print grading model again. During data labelling, this iterative modelling approach combines machine-generated labels with manual review and adjustment, so that a large number of fundus picture samples can be labelled quickly, labelling costs are saved, labelling efficiency is improved and project implementation is accelerated. For example, referring to fig. 4, a small portion of the data samples (the initially unlabelled samples), for example 100 fundus images, is first labelled manually with grade labels. The designed leopard print grading model is then trained on the labelled fundus images (model training); the model trained at the current stage is applied to unlabelled picture samples to produce rough labels (generating labelling results with the current model). On the basis of these rough labels, the incorrectly labelled parts are corrected by manual checking and modification (manually modifying the labelling results), ensuring labelling quality. New labelled samples are thus obtained and added to the labelled data set, and the cycle of model training and manual label verification is iterated, so that a large number of labelled samples can be obtained quickly, until the model meets the preset metric or the number of data samples meets the training requirement. This method improves labelling efficiency while allowing the improvement of the model to be observed dynamically, and labels only the minimum number of samples needed to meet the target metric.
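The iterative annotation loop of fig. 4 can be summarised as the following sketch, in which all helper callables (training, prediction, manual review, evaluation) are hypothetical placeholders supplied by the caller; only the control flow mirrors the described workflow:

```python
# Sketch of the iterative annotation loop of fig. 4: train on the labelled pool, let the
# current model pre-label a batch of unlabelled samples, correct them manually, add them
# to the pool and repeat until the target metric or data volume is reached.
def iterative_labelling(labelled, unlabelled, train_fn, predict_fn, review_fn,
                        evaluate_fn, target_metric, batch_size=100):
    model = train_fn(labelled)                                   # model training on labelled pool
    while unlabelled and evaluate_fn(model) < target_metric:
        batch, unlabelled = unlabelled[:batch_size], unlabelled[batch_size:]
        machine_labels = [predict_fn(model, x) for x in batch]   # labels from the current model
        labelled = labelled + review_fn(batch, machine_labels)   # manual check and correction
        model = train_fn(labelled)                               # retrain with the enlarged pool
    return model, labelled
```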
It should be noted that, although the steps are described above in a specific order, it is not meant to necessarily be performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order, as long as the required functions are achieved.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A fundus image analysis method, comprising:
acquiring a fundus image to be analyzed;
performing pixel-level segmentation on the fundus image to obtain leopard print segmentation information for identifying whether each pixel belongs to a leopard print or a non-leopard print region;
segmenting, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, and obtaining region segmentation information identifying the region of interest to which each pixel belongs, wherein the plurality of regions of interest comprise an optic disc region, a macular region and other regions of the fundus;
quantitatively analyzing the fundus leopard print according to the leopard print segmentation information and the region segmentation information, comprising:
calculating the local leopard print density of each of the plurality of regions of interest according to the leopard print segmentation information and the region segmentation information; and
weighting and summing the corresponding local leopard print densities according to a region weight coefficient preset for each of the plurality of regions of interest, to obtain a globally weighted leopard print density.
2. The method according to claim 1, wherein the relative magnitudes of the region weight coefficients preset for the respective regions of interest are set in the following manner:
the preset regional weight coefficient of the macular region is larger than or equal to the preset regional weight coefficient of the optic disc region, and the preset regional weight coefficient of the optic disc region is larger than the preset regional weight coefficients of other regions of the fundus.
3. The method according to any one of claims 1 to 2, wherein the quantitatively analyzing the fundus leopard based on the leopard segmentation information and the region segmentation information comprises:
and correcting the leopard print density corresponding to the fundus images acquired by the fundus cameras of the corresponding brands and/or the corresponding models according to a preset correction algorithm and correction parameters to obtain corrected leopard print density.
4. A method according to claim 3, wherein the quantitatively analyzing the fundus leopard based on the leopard segmentation information and the region segmentation information comprises:
and carrying out leopard print grading on the fundus image according to the corresponding leopard print density or fundus image.
5. The method according to any one of claims 1 to 2, wherein a trained fundus leopard print segmentation module performs the pixel-level segmentation of the fundus image to obtain the leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region, and is trained in the following manner:
acquiring a first training set comprising fundus images and leopard print segmentation labels, wherein each pixel is marked to belong to a leopard print or non-leopard print area;
training the fundus leopard print segmentation module with the first training set to segment leopard print from non-leopard print regions, obtaining a probability value that each pixel belongs to a leopard print region, and calculating a first loss according to the probability values and the leopard print segmentation labels,
and adjusting the weight parameter of the fundus leopard print segmentation module based on the first loss.
6. The method according to any one of claims 1 to 2, wherein a trained fundus region segmentation module segments, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees and obtains region segmentation information identifying the region of interest to which each pixel belongs, the fundus region segmentation module comprising a plurality of fundus region segmentation sub-models, each corresponding to one region of interest to be segmented.
7. The method according to claim 6, wherein the fundus region segmentation sub-model for the optic disc region or the macular region is trained as follows:
acquiring a second training set comprising fundus images and corresponding region-of-interest segmentation labels marking whether each pixel belongs to the region of interest;
dividing the region of interest and the region other than the region of interest by using a second training set training fundus region segmentation sub-model to obtain a probability value of each pixel belonging to the region of interest, calculating a second loss according to the probability value and the region of interest segmentation label,
and adjusting the weight parameters of the fundus region segmentation sub-model based on the second loss.
8. A fundus leopard print analysis system for implementing the method according to any one of claims 1 to 7, comprising:
the fundus leopard print segmentation module, configured to perform pixel-level segmentation on the fundus image to obtain leopard print segmentation information identifying whether each pixel belongs to a leopard print or non-leopard print region;
the fundus region segmentation module, configured to segment, based on the fundus image, a plurality of regions of interest in which leopard prints affect vision to different degrees, and obtain region segmentation information identifying the region of interest to which each pixel belongs;
the fundus leopard print analysis module, configured to quantitatively analyze the fundus leopard print according to the leopard print segmentation information and the region segmentation information and output the local leopard print density of each region of interest, the globally weighted leopard print density, the leopard print grading result, or a combination thereof.
9. A computer readable storage medium having embodied thereon a computer program executable by a processor to perform the steps of the method of any of claims 1 to 7.
10. An electronic device, comprising:
one or more processors; and
a memory, wherein the memory is to store one or more executable instructions;
the one or more processors are configured to implement the steps of the method of any one of claims 1 to 7 via execution of the one or more executable instructions.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111073695.2A CN113768461B (en) | 2021-09-14 | 2021-09-14 | Fundus image analysis method, fundus image analysis system and electronic equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111073695.2A CN113768461B (en) | 2021-09-14 | 2021-09-14 | Fundus image analysis method, fundus image analysis system and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113768461A CN113768461A (en) | 2021-12-10 |
| CN113768461B true CN113768461B (en) | 2024-03-22 |
Family
ID=78843458
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111073695.2A Active CN113768461B (en) | 2021-09-14 | 2021-09-14 | Fundus image analysis method, fundus image analysis system and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113768461B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115035103B (en) * | 2022-08-01 | 2025-06-13 | 执鼎医疗科技(杭州)有限公司 | Red eye image data processing method and red eye level analysis device |
| CN115082459A (en) * | 2022-08-18 | 2022-09-20 | 北京鹰瞳科技发展股份有限公司 | Method for training detection model for diopter detection and related product |
| CN116503405B (en) * | 2023-06-28 | 2023-10-13 | 依未科技(北京)有限公司 | Myopia fundus change visualization method and device, storage medium and electronic equipment |
| CN116491893B (en) * | 2023-06-28 | 2023-09-15 | 依未科技(北京)有限公司 | Method and device for evaluating change of ocular fundus of high myopia, electronic equipment and storage medium |
| CN116491892B (en) * | 2023-06-28 | 2023-09-22 | 依未科技(北京)有限公司 | Myopia fundus change assessment method and device and electronic equipment |
| CN117437231B (en) * | 2023-12-21 | 2024-04-26 | 依未科技(北京)有限公司 | Positioning method and device for myopia fundus structure change and image processing method |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7500751B2 (en) * | 2004-03-12 | 2009-03-10 | Yokohama Tlo Company Ltd. | Ocular fundus portion analyzer and ocular fundus portion analyzing method |
| US20180064335A1 (en) * | 2014-11-18 | 2018-03-08 | Elwha Llc | Retinal imager device and system with edge processing |
| WO2020079704A1 (en) * | 2018-10-16 | 2020-04-23 | Sigtuple Technologies Private Limited | Method and system for performing semantic segmentation of plurality of entities in an image |
| CN109784337B (en) * | 2019-03-05 | 2022-02-22 | 北京康夫子健康技术有限公司 | Method and device for identifying yellow spot area and computer readable storage medium |
| CN109829446A (en) * | 2019-03-06 | 2019-05-31 | 百度在线网络技术(北京)有限公司 | Eye fundus image recognition methods, device, electronic equipment and storage medium |
| CN110443812B (en) * | 2019-07-26 | 2022-07-01 | 北京百度网讯科技有限公司 | Fundus image segmentation method, device, equipment and medium |
| CN112949585B (en) * | 2021-03-30 | 2025-02-18 | 耳纹元智能科技(广东)有限公司 | Method, device, electronic device and storage medium for identifying blood vessels in fundus images |
2021-09-14: CN application CN202111073695.2A filed; granted as CN113768461B, status Active.
Patent Citations (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2008073188A (en) * | 2006-09-21 | 2008-04-03 | Gifu Univ | Image analysis system and image analysis program |
| WO2010134889A1 (en) * | 2009-05-19 | 2010-11-25 | Singapore Health Services Pte Ltd | Methods and systems for pathological myopia detection |
| US8879813B1 (en) * | 2013-10-22 | 2014-11-04 | Eyenuk, Inc. | Systems and methods for automated interest region detection in retinal images |
| CN105310645A (en) * | 2014-06-18 | 2016-02-10 | 佳能株式会社 | Image processing apparatus and image processing method |
| WO2018116321A2 (en) * | 2016-12-21 | 2018-06-28 | Braviithi Technologies Private Limited | Retinal fundus image processing method |
| JP2017087055A (en) * | 2017-02-27 | 2017-05-25 | 国立大学法人東北大学 | Ophthalmological analysis device |
| CN109166117A (en) * | 2018-08-31 | 2019-01-08 | 福州依影健康科技有限公司 | A kind of eye fundus image automatically analyzes comparison method and a kind of storage equipment |
| CN110163839A (en) * | 2019-04-02 | 2019-08-23 | 上海鹰瞳医疗科技有限公司 | Leopard pattern fundus image recognition method, model training method and equipment |
| CN110120047A (en) * | 2019-04-04 | 2019-08-13 | 平安科技(深圳)有限公司 | Image Segmentation Model training method, image partition method, device, equipment and medium |
| CN110009627A (en) * | 2019-04-11 | 2019-07-12 | 北京百度网讯科技有限公司 | Method and apparatus for processing images |
| CN110570421A (en) * | 2019-09-18 | 2019-12-13 | 上海鹰瞳医疗科技有限公司 | Multi-task fundus image classification method and device |
| CN110599480A (en) * | 2019-09-18 | 2019-12-20 | 上海鹰瞳医疗科技有限公司 | Multi-source input fundus image classification method and device |
| CN111583261A (en) * | 2020-06-19 | 2020-08-25 | 林晨 | Fundus super-wide-angle image analysis method and terminal |
| CN111709966A (en) * | 2020-06-23 | 2020-09-25 | 上海鹰瞳医疗科技有限公司 | Fundus image segmentation model training method and equipment |
| CN112734773A (en) * | 2021-01-28 | 2021-04-30 | 依未科技(北京)有限公司 | Sub-pixel-level fundus blood vessel segmentation method, device, medium and equipment |
| CN112957005A (en) * | 2021-02-01 | 2021-06-15 | 山西省眼科医院(山西省红十字防盲流动眼科医院、山西省眼科研究所) | Automatic identification and laser photocoagulation region recommendation algorithm for fundus contrast image non-perfusion region |
| CN113344894A (en) * | 2021-06-23 | 2021-09-03 | 依未科技(北京)有限公司 | Method and device for extracting characteristics of eyeground leopard streak spots and determining characteristic index |
| CN113243887A (en) * | 2021-07-16 | 2021-08-13 | 中山大学中山眼科中心 | Intelligent diagnosis and treatment instrument for macular degeneration of old people |
Non-Patent Citations (1)
| Title |
|---|
| Research on retinal lesion detection methods based on deep neural networks; Liu Lei; China Doctoral Dissertations Full-text Database (Medicine and Health Sciences) (2019/08); E073-13 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113768461A (en) | 2021-12-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113768461B (en) | Fundus image analysis method, fundus image analysis system and electronic equipment | |
| Neto et al. | An unsupervised coarse-to-fine algorithm for blood vessel segmentation in fundus images | |
| Sangeethaa et al. | An intelligent model for blood vessel segmentation in diagnosing DR using CNN | |
| Chaum et al. | Automated diagnosis of retinopathy by content-based image retrieval | |
| CN106530295A (en) | Fundus image classification method and device of retinopathy | |
| Garcia et al. | Detection of hard exudates in retinal images using a radial basis function classifier | |
| Phridviraj et al. | A bi-directional Long Short-Term Memory-based Diabetic Retinopathy detection model using retinal fundus images | |
| Alais et al. | Fast macula detection and application to retinal image quality assessment | |
| CN114694236A (en) | An Eye Movement Segmentation and Localization Method Based on Recurrent Residual Convolutional Neural Network | |
| Shaik et al. | Glaucoma identification based on segmentation and fusion techniques | |
| Giancardo | Automated fundus images analysis techniques to screen retinal diseases in diabetic patients | |
| Giancardo et al. | Quality assessment of retinal fundus images using elliptical local vessel density | |
| Rani et al. | Classification of retinopathy of prematurity using back propagation neural network | |
| Datta et al. | An integrated fundus image segmentation algorithm for multiple eye ailments | |
| Kumari et al. | Automated process for retinal image segmentation and classification via deep learning based cnn model | |
| WO2020016836A1 (en) | System and method for managing the quality of an image | |
| Akbar et al. | A Novel Filtered Segmentation-Based Bayesian Deep Neural Network Framework on Large Diabetic Retinopathy Databases. | |
| Hussein et al. | Convolutional Neural Network in Classifying Three Stages of Age-Related Macula Degeneration | |
| Araque-Gallardo et al. | Analysis of Pre-trained Convolutional Neural Network Models in Diabetic Macular Edema Detection Through Retinal Fundus Images | |
| Velázquez-González et al. | Detection and classification of non-proliferative diabetic retinopathy using a back-propagation neural network | |
| Azeroual et al. | Convolutional Neural Network for Segmentation and Classification of Glaucoma. | |
| Kumar et al. | Ocular Disease Identification and Classification Using LBP-KNN | |
| Basheer et al. | Estimation of diabetic retinopathy using deep learning | |
| Baba et al. | Retinal Disease Classification Using Custom CNN Model From OCT Images | |
| Aknan et al. | A Diabetic Retinopathy Classification and Analysis Towards the Development of a Computer-Aided Medical Decision Support System |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |