CN105913075A - Endoscopic image focus identification method based on pulse coupling nerve network - Google Patents
- Publication number
- CN105913075A CN105913075A CN201610207950.0A CN201610207950A CN105913075A CN 105913075 A CN105913075 A CN 105913075A CN 201610207950 A CN201610207950 A CN 201610207950A CN 105913075 A CN105913075 A CN 105913075A
- Authority
- CN
- China
- Prior art keywords
- input
- image
- focus
- vector
- competition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2111—Selection of the most significant subset of features by using evolutionary computational techniques, e.g. genetic algorithms
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Physiology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Endoscopes (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of medical image pattern recognition and analysis, and in particular to a lesion-image pattern recognition method for a human gastrointestinal endoscopic detection system.
Background
Gastrointestinal diseases pose an ever-growing threat to human health, and many other diseases can be caused, directly or indirectly, by disorders of the digestive system, so the examination and diagnosis of gastrointestinal diseases is of great significance to human health. The best way to detect gastrointestinal disease is to observe the gastrointestinal tract directly, which makes endoscopy a comparatively direct and effective method. However, traditional insertion endoscopes such as colonoscopes and gastroscopes cannot reach deep into the bowel because of the mechanics of insertion, leaving the small intestine as a detection blind zone; insertion endoscopes are also inconvenient to use, cause the patient pain, and carry a risk of intestinal perforation. Advances in semiconductor technology, sensor technology, LED illumination, wireless communication, and micro-control technology have laid the foundation for the emergence and popularization of the wireless capsule endoscope. A wireless capsule endoscope consists of a miniature image sensor, an illumination module, a wireless transmission module, and a power management module. After the patient swallows it, the capsule endoscope moves down the digestive tract under gastrointestinal peristalsis.
During this movement, the glass dome at the front of the capsule distends the bowel and stays close to the intestinal wall; the illumination module lights the wall within the field of view while the image sensor captures images of the inner intestinal wall through a short-focal-length lens and transmits the image data out of the body. The capsule endoscope continuously transmits gastrointestinal images out of the body until it is naturally excreted through the anus. The whole process requires no manual intervention, causes the patient no pain or inconvenience, and has no detection blind zone, achieving painless, non-invasive examination of the entire digestive tract. Because of these advantages, capsule endoscopy is increasingly used in clinical practice as a new digestive-tract examination technology.
The capsule endoscope works inside the human body for about 8 hours, and transit takes even longer in patients with gastrointestinal disease, so a single examination produces at least 2 frames/s × 3600 s/h × 8 h = 57,600 image frames. Searching such a huge number of video frames for lesions or pathological features is extremely time-consuming and labor-intensive; even an experienced specialist needs at least 2 hours. Besides wasting time, visual fatigue causes missed detections. Using image processing and pattern recognition to achieve computer-based intelligent recognition of lesion images is therefore an inevitable trend. Because endoscopic images depict the human digestive tract, the scenes are highly complex and lesion characteristics vary widely, so commonly used digital image processing and pattern recognition algorithms struggle to cope with complex endoscopic images and variable lesions. The pulse-coupled neural network derives from the working mechanism of the mammalian visual system; compared with traditional neural network models such as BP and RBF, it has inherent advantages in the image processing field, has already demonstrated them in some applications, and holds great potential for intelligent lesion recognition.
Existing work on human gastrointestinal endoscopic detection has concentrated on the capsule endoscope hardware itself, while research on lesion-image processing and pattern recognition lags behind and has become the bottleneck of capsule endoscopic detection systems. Moreover, existing image lesion recognition techniques focus on the patterns of specific lesions, yet because lesions are so variable, even lesions of the same type differ greatly in appearance. Conventional digital image processing and pattern recognition algorithms also have difficulty with content-rich endoscopic images, resulting in recognition methods of low specificity and sensitivity.
Summary of the Invention
To overcome the difficulty of identifying and extracting existing lesion patterns, the present invention provides a pulse-coupled-neural-network-based method for identifying lesions in endoscopic images that builds on a localization approach combining a pulse-coupled neural network with a visual attention mechanism and does not distinguish between specific lesion types. The method can accurately classify endoscopic images into a normal mode and a lesion mode, and marks the identified lesion regions of the endoscopic image for clinicians to judge further, reducing their workload.
The technical solution provided to solve the above technical problem is as follows:
A method for identifying lesions in endoscopic images based on a pulse-coupled neural network, the method comprising the following steps:
a. Video framing
Input the endoscopic detection video file and split it into frames to obtain single endoscopic images in bitmap format;
b. Image preprocessing
Smooth the black border of the bitmap image obtained in step a according to the endoscope's field-of-view parameters to obtain an endoscopic image with clear boundaries; then filter and denoise with a high-pass filter (for example, a Butterworth high-pass filter), and then apply a median filter for enhancement, removing noise from the image region to be processed while retaining the high-frequency content of the image;
c. Color space conversion for visual perception
The bitmap image obtained in step b is in the device-oriented RGB color space; convert it to the perception-oriented Luv color space;
d. Localization of suspected lesion regions
Take the u and v components of the Luv image obtained in step c as input to compute the color-feature saliency map uv(c,s), and the L component as input to compute the luminance-feature saliency map L(c,s); then use a Laplace transform algorithm and a virtual-connection method to obtain the edge regions of the salient content of the image, and compute the contour-feature saliency map O(c,s) and the texture-feature saliency map T(c,s). Normalize the resulting color-feature saliency map uv(c,s), luminance-feature saliency map L(c,s), contour-feature saliency map O(c,s), and texture-feature saliency map T(c,s) at multiple scales and fuse them to obtain the saliency map S of the image; then apply an erosion algorithm to filter out salient regions of small area, and rank the remaining salient regions by area, giving the suspected lesion regions;
e. Feature vector construction
Take the saliency map S obtained in step d as input; within each suspected lesion region, construct the pixel color feature vector V(uv) and luminance feature vector V(L), and compute and construct the region contour feature vector V(O) and texture feature vector V(T);
f. Pattern recognition of the lesion region
Take the feature vectors constructed in step e as input and perform pattern recognition with a pulse-coupled neural network to obtain the lesion pattern of the suspected region under examination, i.e., normal mode or lesion mode;
g. Region transfer
Identify the suspected lesion regions of the image one by one in order of region size; if further suspected lesion regions remain, repeat steps e and f for pattern recognition until all suspected lesion regions have been identified;
h. Lesion image classification and extraction
Perform an OR operation on the pattern results of all suspected lesion regions of the endoscopic image to obtain the classification of that image, i.e., image-normal mode or image-lesion mode; if the result is lesion mode, mark the lesion regions;
i. Repeat steps b, c, d, e, f, g, and h until endoscopic image recognition of the entire video file is complete.
Further, in step e, within the salient regions of the endoscopic image saliency map S, construct the pixel color feature vector V′(uv) and luminance feature vector V′(L), map them to a high-dimensional space with a sigmoid kernel function, and use principal component analysis (PCA) to extract the kernel principal component features of the feature data, obtaining the dimensionality-reduced color feature vector V(uv) and luminance feature vector V(L); at the same time, compute and construct the region contour feature vector V(O) and texture feature vector V(T) within the region, assemble the feature matrix, and perform lesion pattern recognition.
Still further, in step f, the pulse-coupled neural network consists of an input layer, a neuron layer of intersecting cortical model (ICM) neurons, and a competitive output layer;
The input layer has four input channels: color feature input, luminance feature input, contour feature input, and texture feature input; the neuron layer of the intersecting cortical model uses four ICM neurons; and the competitive output layer consists of a competition-neuron weight matrix LW and a competition function C;
The color-feature and luminance-feature input channels of the input layer are connected to the input of ICM neuron No. 1, and the contour-feature and texture-feature input channels are connected to the input of ICM neuron No. 2; ICM neurons No. 1 and No. 2 are connected to the inputs of ICM neurons No. 3 and No. 4, respectively; ICM neurons No. 3 and No. 4 are connected to the input of the competition-neuron weight matrix LW of the competitive output layer; and the output of LW is connected to the input of the competition function C of the competitive-layer neurons;
Furthermore, each ICM neuron comprises two coupled oscillators, with the connection weighting coefficient matrix set to W.
The competition-neuron weight matrix LW is a 2-dimensional vector in which exactly one element is 1 and the rest are 0. The competition function of the competitive-layer neurons uses a Gaussian function and outputs a 2-dimensional vector in which the element corresponding to the pattern class with the highest similarity probability is set to 1 and the others to 0; the position of the 1 indicates the class into which the input feature matrix is recognized, i.e., normal mode or lesion mode.
Compared with the prior art, the beneficial effects of the present invention are:
1. The lesion pattern recognition method for endoscopic images of the present invention localizes suspected lesion regions with a mechanism based on human visual attention, which greatly reduces the computational load of the artificial neural network and thereby improves the accuracy of lesion pattern recognition in endoscopic images.
2. The image processing method of the present invention works in the perception-based Luv color space, making maximal use of the color information of the endoscopic image; since color is key information for diagnosing lesion regions, this improves the accuracy and specificity of lesion-region determination.
3. The pattern recognition method of the present invention does not distinguish between specific lesion types; it classifies all regions into lesion mode and normal mode and uses a pulse-coupled neural network, which greatly improves the accuracy and practicality of lesion recognition in endoscopic images and reduces the workload of clinicians.
Brief Description of the Drawings
Fig. 1 is a flow chart of the lesion identification method based on a pulse-coupled neural network according to the present invention.
Fig. 2 is a structural diagram of the pulse-coupled neural network.
Fig. 3 is a structural diagram of an ICM neuron.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Figs. 1 to 3, a method for identifying lesions in endoscopic images based on a pulse-coupled neural network comprises the following steps:
a. Video framing
Using a detection video file from the capsule endoscope detection system of the Given company, input the endoscopic detection video and split it into frames to obtain single endoscopic images in bitmap format;
b. Image preprocessing
Smooth the black border of the bitmap image obtained in step a according to the endoscope's field-of-view parameters to obtain an endoscopic image with clear boundaries; then filter with a Butterworth high-pass filter and then with a median filter, removing noise from the image region to be processed while retaining the high-frequency content of the image;
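The filtering chain of step b can be sketched as follows. This is a minimal illustration, not the patented implementation: the cutoff frequency, filter order, and median window size are hypothetical parameters, and `butterworth_highpass` and `preprocess` are names introduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def butterworth_highpass(img, cutoff=30.0, order=2):
    """Frequency-domain Butterworth high-pass filter for a 2-D grayscale image."""
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    V, U = np.meshgrid(v, u)
    d = np.sqrt(U**2 + V**2)                     # distance from the spectrum center
    h = 1.0 / (1.0 + (cutoff / (d + 1e-8)) ** (2 * order))  # H(u,v): blocks low frequencies
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h))
    return np.real(filtered)

def preprocess(frame, cutoff=30.0, median_size=3):
    """Step b sketch: high-pass filtering followed by median filtering."""
    highpassed = butterworth_highpass(frame.astype(float), cutoff=cutoff)
    return median_filter(highpassed, size=median_size)
```

In practice the high-pass output would be recombined with or weighted against the original frame; here only the two-stage filter order of step b is shown.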
c. Color space conversion for visual perception
The bitmap image obtained in step b is in the device-oriented RGB color space; convert it to the perception-oriented Luv color space;
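The RGB-to-Luv conversion of step c can be sketched directly from the CIE definitions. This assumes the RGB values are already linear and uses the D65 white point; `rgb_to_luv` is a name introduced here for illustration.

```python
import numpy as np

# Linear sRGB (D65) -> XYZ matrix
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
UN, VN = 0.19784, 0.46832   # u', v' chromaticity of the D65 white point

def rgb_to_luv(rgb):
    """Convert an RGB image (floats in [0, 1], assumed linear) to CIE Luv channels."""
    xyz = rgb @ M.T
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    denom = x + 15 * y + 3 * z + 1e-12
    up, vp = 4 * x / denom, 9 * y / denom
    # CIE lightness: cube-root law above the linear threshold
    L = np.where(y > (6 / 29) ** 3, 116 * np.cbrt(y) - 16, (29 / 3) ** 3 * y)
    u = 13 * L * (up - UN)
    v = 13 * L * (vp - VN)
    return L, u, v
```

For white input (RGB = 1,1,1) this yields L ≈ 100 with u and v near 0, as expected for the reference white.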
d. Localization of suspected lesion regions
Take the u and v components of the Luv image obtained in step c as input to compute the color-feature saliency map uv(c,s), and the L component as input to compute the luminance-feature saliency map L(c,s); then use a Laplace transform algorithm and a virtual-connection method to obtain the edge regions of the salient content of the image, and compute the contour-feature saliency map O(c,s) and the texture-feature saliency map T(c,s). Normalize the resulting color-feature saliency map uv(c,s), luminance-feature saliency map L(c,s), contour-feature saliency map O(c,s), and texture-feature saliency map T(c,s) at multiple scales and fuse them to obtain the saliency map S of the image; then apply an erosion algorithm to filter out salient regions of small area, and rank the remaining salient regions by area, giving the suspected lesion regions;
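One common way to realize the multi-scale center-surround computation and fusion of step d is the Itti-style scheme sketched below. The scale pairs, the global [0, 1] normalization, and the plain summation are illustrative assumptions; the patent's contour/texture maps and the erosion filtering of small regions are omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, pairs=((1, 4), (2, 8))):
    """Center-surround maps |fine - coarse| at a few (center, surround) scale pairs."""
    return [np.abs(gaussian_filter(channel, c) - gaussian_filter(channel, s))
            for c, s in pairs]

def normalize(m):
    """Global min-max normalization to [0, 1] before fusion."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo + 1e-12)

def fuse_saliency(feature_channels):
    """Fuse per-feature center-surround maps into one saliency map S."""
    total = 0.0
    for ch in feature_channels:          # e.g. [L, u, v, contour, texture]
        for m in center_surround(ch):
            total = total + normalize(m)
    return normalize(total)
```

The resulting map S is then thresholded and its connected components ranked by area to obtain candidate regions, as the step describes.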
e. Feature vector construction
Take the saliency map S obtained in step d as input; within each suspected lesion region, construct the pixel color feature vector V(uv) and luminance feature vector V(L), and compute and construct the region contour feature vector V(O) and texture feature vector V(T);
f. Pattern recognition of the lesion region
Take the feature vectors constructed in step e as input and perform pattern recognition with a pulse-coupled neural network to obtain the lesion pattern of the suspected region under examination, i.e., normal mode or lesion mode;
g. Region transfer
Identify the suspected lesion regions of the image one by one in order of region size; if further suspected lesion regions remain, repeat steps e and f for pattern recognition until all suspected lesion regions have been identified;
h. Lesion image classification and extraction
Perform an OR operation on the pattern results of all suspected lesion regions of the endoscopic image to obtain the classification of that image, i.e., image-normal mode or image-lesion mode; if the result is lesion mode, mark the lesion regions;
i. Repeat steps b, c, d, e, f, g, and h until endoscopic image recognition of the entire video file is complete.
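The OR combination of step h reduces to a one-liner once region modes are encoded numerically; the 0/1 encoding below is an assumption of this sketch.

```python
NORMAL, LESION = 0, 1

def classify_image(region_modes):
    """Step h sketch: the image is in lesion mode iff any suspected region is."""
    return LESION if any(m == LESION for m in region_modes) else NORMAL
```

An image with no suspected regions at all falls back to normal mode under this encoding.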
Further, in step e, within the salient regions of the endoscopic image saliency map S, construct the pixel color feature vector V′(uv) and luminance feature vector V′(L), map them to a high-dimensional space with a sigmoid kernel function, and use principal component analysis (PCA) to extract the kernel principal component features of the feature data, obtaining the dimensionality-reduced color feature vector V(uv) and luminance feature vector V(L); at the same time, compute and construct the region contour feature vector V(O) and texture feature vector V(T) within the region, assemble the feature matrix, and perform lesion pattern recognition.
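The sigmoid-kernel mapping followed by kernel principal component extraction described above can be sketched with scikit-learn's `KernelPCA`; the component count is an arbitrary illustrative choice, and `reduce_features` is a name introduced here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def reduce_features(raw_vectors, n_components=8):
    """Project raw V'(uv)/V'(L) row vectors through a sigmoid kernel and keep
    the leading kernel principal components, giving the reduced V(uv)/V(L)."""
    kpca = KernelPCA(n_components=n_components, kernel="sigmoid")
    return kpca.fit_transform(raw_vectors)
```

Each row of the input is one region's raw feature vector; the output rows are the dimensionality-reduced vectors that enter the feature matrix.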
Further, in step f, the pulse-coupled neural network consists of an input layer, a neuron layer of intersecting cortical model (ICM) neurons, and a competitive output layer;
The input layer has four input channels: color feature input, luminance feature input, contour feature input, and texture feature input; the ICM neuron layer uses four ICM neurons; and the competitive output layer consists of a competition-neuron weight matrix LW and a competition function C;
The color-feature and luminance-feature input channels of the input layer are connected to the input of ICM neuron No. 1, and the contour-feature and texture-feature input channels are connected to the input of ICM neuron No. 2; ICM neurons No. 1 and No. 2 are connected to the inputs of ICM neurons No. 3 and No. 4, respectively; ICM neurons No. 3 and No. 4 are connected to the input of the competition-neuron weight matrix LW of the competitive output layer; and the output of LW is connected to the input of the competition function C of the competitive-layer neurons;
Still further, each ICM neuron comprises two coupled oscillators, with the connection weighting coefficient matrix set to W.
The competition-neuron weight matrix LW is a 2-dimensional vector in which exactly one element is 1 and the rest are 0. The competition function of the competitive-layer neurons uses a Gaussian function and outputs a 2-dimensional vector in which the element corresponding to the pattern class with the highest similarity probability is set to 1 and the others to 0; the position of the 1 indicates the class into which the input feature matrix is recognized, i.e., normal mode or lesion mode.
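A minimal numerical sketch of one ICM neuron update (the two coupled "oscillators" being the state F and the threshold Theta) and the Gaussian competition output follows. The coefficients f, g, h, the class prototype vectors, and sigma are hypothetical; the actual network of Fig. 2 wires four such neurons together as described above.

```python
import numpy as np

def icm_step(F, Theta, S, Y, W, f=0.9, g=0.8, h=20.0):
    """One ICM iteration on vector state, a common formulation:
    F_{n+1} = f*F_n + S + W @ Y_n ;  Y_{n+1} = (F_{n+1} > Theta_n) ;
    Theta_{n+1} = g*Theta_n + h*Y_{n+1}."""
    F = f * F + S + W @ Y          # feeding state driven by stimulus S and pulses Y
    Y = (F > Theta).astype(float)  # fire where the state exceeds the threshold
    Theta = g * Theta + h * Y      # threshold decays, then jumps where a pulse fired
    return F, Theta, Y

def compete(x, prototypes, sigma=1.0):
    """Competitive output layer sketch: Gaussian similarity to the two class
    prototypes (normal, lesion), returning a one-hot 2-D vector."""
    sims = np.array([np.exp(-np.sum((x - p) ** 2) / (2 * sigma ** 2))
                     for p in prototypes])
    out = np.zeros(2)
    out[np.argmax(sims)] = 1.0     # winner takes all: position of the 1 is the class
    return out
```

Starting from zero state, one step with unit stimulus makes every element fire, and a feature vector close to the lesion prototype yields the one-hot output (0, 1).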
Finally, it should be noted that what is listed above is only one specific embodiment of the present invention. Obviously, the present invention is not limited to the above embodiment, and many variations are possible. All variations that a person of ordinary skill in the art can derive or infer directly from the disclosure of the present invention shall be considered within the protection scope of the present invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610207950.0A CN105913075A (en) | 2016-04-05 | 2016-04-05 | Endoscopic image focus identification method based on pulse coupling nerve network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610207950.0A CN105913075A (en) | 2016-04-05 | 2016-04-05 | Endoscopic image focus identification method based on pulse coupling nerve network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN105913075A true CN105913075A (en) | 2016-08-31 |
Family
ID=56745336
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610207950.0A Pending CN105913075A (en) | 2016-04-05 | 2016-04-05 | Endoscopic image focus identification method based on pulse coupling nerve network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105913075A (en) |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107274428A (en) * | 2017-08-03 | 2017-10-20 | 汕头市超声仪器研究所有限公司 | Multi-target three-dimensional ultrasonic image partition method based on emulation and measured data |
| CN107705852A (en) * | 2017-12-06 | 2018-02-16 | 北京华信佳音医疗科技发展有限责任公司 | Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope |
| CN109241898A (en) * | 2018-08-29 | 2019-01-18 | 合肥工业大学 | Object localization method and system, the storage medium of hysteroscope video |
| CN109411084A (en) * | 2018-11-28 | 2019-03-01 | 武汉大学人民医院(湖北省人民医院) | A kind of intestinal tuberculosis assistant diagnosis system and method based on deep learning |
| CN110706225A (en) * | 2019-10-14 | 2020-01-17 | 山东省肿瘤防治研究院(山东省肿瘤医院) | Artificial intelligence-based tumor identification system |
| CN110705440A (en) * | 2019-09-27 | 2020-01-17 | 贵州大学 | A Capsule Endoscopy Image Recognition Model Based on Neural Network Feature Fusion |
| CN110706220A (en) * | 2019-09-27 | 2020-01-17 | 贵州大学 | Capsule endoscope image processing and analyzing method |
| CN110910325A (en) * | 2019-11-22 | 2020-03-24 | 深圳信息职业技术学院 | A medical image processing method and device based on artificial butterfly optimization algorithm |
| CN111507454A (en) * | 2019-01-30 | 2020-08-07 | 兰州交通大学 | Improved cross cortical neural network model for remote sensing image fusion |
| CN111784683A (en) * | 2020-07-10 | 2020-10-16 | 天津大学 | Pathological slice detection method and device, computer equipment and storage medium |
| CN112136140A (en) * | 2018-05-14 | 2020-12-25 | 诺基亚技术有限公司 | Method and apparatus for image recognition |
| CN112184837A (en) * | 2020-09-30 | 2021-01-05 | 百度(中国)有限公司 | Image detection method and device, electronic equipment and storage medium |
| CN113920042A (en) * | 2021-09-24 | 2022-01-11 | 深圳市资福医疗技术有限公司 | Image processing system and capsule endoscope |
| CN114240941A (en) * | 2022-02-25 | 2022-03-25 | 浙江华诺康科技有限公司 | Endoscope image noise reduction method, device, electronic apparatus, and storage medium |
| CN115100485A (en) * | 2022-05-27 | 2022-09-23 | 国网上海市电力公司 | Instrument abnormal state identification method based on power inspection robot |
| CN115798725A (en) * | 2022-10-27 | 2023-03-14 | 佛山读图科技有限公司 | Method for making lesion-containing human body simulation image data for nuclear medicine |
| CN118887472A (en) * | 2024-07-23 | 2024-11-01 | 兰州交通大学 | A breast image recognition and judgment method based on SFC-MSPCNN and improved ConvNeXt |
- 2016-04-05 CN CN201610207950.0A patent/CN105913075A/en active Pending
Cited By (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019023819A1 (en) * | 2017-08-03 | 2019-02-07 | 汕头市超声仪器研究所有限公司 | Simulated and measured data-based multi-target three-dimensional ultrasound image segmentation method |
| CN107274428A (en) * | 2017-08-03 | 2017-10-20 | 汕头市超声仪器研究所有限公司 | Multi-target three-dimensional ultrasonic image partition method based on emulation and measured data |
| US11282204B2 (en) | 2017-08-03 | 2022-03-22 | Shantou Institute Of Ultrasonic Instruments Co., Ltd. | Simulated and measured data-based multi-target three-dimensional ultrasound image segmentation method |
| CN107705852A (en) * | 2017-12-06 | 2018-02-16 | 北京华信佳音医疗科技发展有限责任公司 | Real-time the lesion intelligent identification Method and device of a kind of medical electronic endoscope |
| CN112136140A (en) * | 2018-05-14 | 2020-12-25 | 诺基亚技术有限公司 | Method and apparatus for image recognition |
| CN109241898B (en) * | 2018-08-29 | 2020-09-22 | 合肥工业大学 | Target positioning method and system and storage medium for endoscopic imaging |
| CN109241898A (en) * | 2018-08-29 | 2019-01-18 | 合肥工业大学 | Object localization method and system, the storage medium of hysteroscope video |
| CN109411084A (en) * | 2018-11-28 | 2019-03-01 | 武汉大学人民医院(湖北省人民医院) | A kind of intestinal tuberculosis assistant diagnosis system and method based on deep learning |
| CN111507454B (en) * | 2019-01-30 | 2022-09-06 | 兰州交通大学 | Improved cross cortical neural network model for remote sensing image fusion |
| CN111507454A (en) * | 2019-01-30 | 2020-08-07 | 兰州交通大学 | Improved cross cortical neural network model for remote sensing image fusion |
| CN110705440B (en) * | 2019-09-27 | 2022-11-01 | 贵州大学 | Capsule endoscopy image recognition model based on neural network feature fusion |
| CN110705440A (en) * | 2019-09-27 | 2020-01-17 | 贵州大学 | A Capsule Endoscopy Image Recognition Model Based on Neural Network Feature Fusion |
| CN110706220B (en) * | 2019-09-27 | 2023-04-18 | 贵州大学 | Capsule endoscope image processing and analyzing method |
| CN110706220A (en) * | 2019-09-27 | 2020-01-17 | 贵州大学 | Capsule endoscope image processing and analyzing method |
| CN110706225A (en) * | 2019-10-14 | 2020-01-17 | 山东省肿瘤防治研究院(山东省肿瘤医院) | Artificial intelligence-based tumor identification system |
| CN110910325A (en) * | 2019-11-22 | 2020-03-24 | 深圳信息职业技术学院 | A medical image processing method and device based on artificial butterfly optimization algorithm |
| CN111784683B (en) * | 2020-07-10 | 2022-05-17 | 天津大学 | Pathological section detection method and device, computer equipment and storage medium |
| CN111784683A (en) * | 2020-07-10 | 2020-10-16 | 天津大学 | Pathological slice detection method and device, computer equipment and storage medium |
| CN112184837A (en) * | 2020-09-30 | 2021-01-05 | 百度(中国)有限公司 | Image detection method and device, electronic equipment and storage medium |
| CN112184837B (en) * | 2020-09-30 | 2024-09-24 | 百度(中国)有限公司 | Image detection method and device, electronic equipment and storage medium |
| CN113920042A (en) * | 2021-09-24 | 2022-01-11 | 深圳市资福医疗技术有限公司 | Image processing system and capsule endoscope |
| CN113920042B (en) * | 2021-09-24 | 2023-04-18 | 深圳市资福医疗技术有限公司 | Image processing system and capsule endoscope |
| CN114240941B (en) * | 2022-02-25 | 2022-05-31 | 浙江华诺康科技有限公司 | Endoscope image noise reduction method, device, electronic apparatus, and storage medium |
| CN114240941A (en) * | 2022-02-25 | 2022-03-25 | 浙江华诺康科技有限公司 | Endoscope image noise reduction method, device, electronic apparatus, and storage medium |
| CN115100485A (en) * | 2022-05-27 | 2022-09-23 | 国网上海市电力公司 | Instrument abnormal state identification method based on power inspection robot |
| CN115798725A (en) * | 2022-10-27 | 2023-03-14 | 佛山读图科技有限公司 | Method for making lesion-containing human body simulation image data for nuclear medicine |
| CN115798725B (en) * | 2022-10-27 | 2024-03-26 | 佛山读图科技有限公司 | Method for manufacturing human body simulation image data with lesion for nuclear medicine |
| CN118887472A (en) * | 2024-07-23 | 2024-11-01 | 兰州交通大学 | A breast image recognition and judgment method based on SFC-MSPCNN and improved ConvNeXt |
| CN118887472B (en) * | 2024-07-23 | 2025-01-24 | 兰州交通大学 | A breast image recognition and judgment method based on SFC-MSPCNN and improved ConvNeXt |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105913075A (en) | Endoscopic image lesion identification method based on pulse coupled neural network | |
| Ali et al. | A survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract | |
| Li et al. | Computer-aided detection of bleeding regions for capsule endoscopy images | |
| Li et al. | Computer-based detection of bleeding and ulcer in wireless capsule endoscopy images by chromaticity moments | |
| JP2022545124A (en) | Gastrointestinal Early Cancer Diagnosis Support System and Examination Device Based on Deep Learning | |
| CN114708258B (en) | Eye fundus image detection method and system based on dynamic weighted attention mechanism | |
| CN117274270B (en) | Digestive endoscope real-time auxiliary system and method based on artificial intelligence | |
| CN110189303B (en) | NBI image processing method based on deep learning and image enhancement and application thereof | |
| Naz et al. | Detection and classification of gastrointestinal diseases using machine learning | |
| CN107730489A (en) | Computer-aided detection system and method for small-intestine lesions in wireless capsule endoscopy images | |
| CN114372951A (en) | Nasopharyngeal carcinoma localization and segmentation method and system based on image segmentation convolutional neural network | |
| CN103984957A (en) | Automatic early warning system for suspicious lesion area of capsule endoscope image | |
| CN105469383A (en) | Wireless capsule endoscopy redundant image screening method based on multi-feature fusion | |
| US12315143B2 (en) | System and method of using right and left eardrum otoscopy images for automated otoscopy image analysis to diagnose ear pathology | |
| Li et al. | Comparison of several texture features for tumor detection in CE images | |
| Lei et al. | Automated detection of retinopathy of prematurity by deep attention network | |
| CN109241963B (en) | Intelligent recognition method of bleeding points in capsule gastroscopic images based on Adaboost machine learning | |
| CN115018767A (en) | Cross-modal endoscopic image conversion and lesion segmentation method based on intrinsic representation learning | |
| CN116205814A (en) | Medical endoscope image enhancement method, system and computer equipment | |
| Ghosh et al. | Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy | |
| Özbay | Gastrointestinal tract disease classification using residual-inception transformer with wireless capsule endoscopy images segmentation | |
| CN114332910A (en) | A Human Body Parts Segmentation Method Based on Similar Feature Computation for Far Infrared Images | |
| Lange et al. | Computer-aided-diagnosis (CAD) for colposcopy | |
| Li et al. | Computer aided detection of bleeding in capsule endoscopy images | |
| Cao et al. | Transformer for computer-aided diagnosis of laryngeal carcinoma in pcle images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2016-08-31