
CN111508057A - Trachea model reconstruction method and system by using computer vision and deep learning technology - Google Patents


Info

Publication number
CN111508057A
CN111508057A (application CN201910334929.0A)
Authority
CN
China
Prior art keywords
image
module
trachea
feature points
image feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910334929.0A
Other languages
Chinese (zh)
Inventor
卢昭全
王友光
陈威廷
许斐凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CN111508057A publication Critical patent/CN111508057A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a trachea model reconstruction method using computer vision and deep learning technology, comprising the steps of obtaining images of the tracheal wall, loading the image data, image processing, image feature extraction, image comparison, pose estimation and spatial conversion, and reconstructing a three-dimensional trachea model. A trachea model reconstruction system is thereby provided that can reconstruct and record a three-dimensional trachea model accurately and quickly.

Description

Trachea model reconstruction method and system by using computer vision and deep learning technology
Technical Field
The invention relates to a trachea model reconstruction method and system using computer vision and deep learning technology, and in particular to a method and system that can accurately and quickly reconstruct and record a three-dimensional trachea model.
Background
When a patient is under general anesthesia, undergoing cardiopulmonary resuscitation, or unable to breathe independently during surgery, intubation is required so that an artificial airway can be inserted into the trachea and medical gas delivered smoothly into the patient's trachea. During intubation, medical staff cannot directly see and adjust the artificial airway; they must operate by feel and past experience to avoid puncturing the patient's trachea. This often requires multiple attempts and delays the establishment of a patent airway. Rapidly and accurately building a three-dimensional tracheal model to assist medical staff with intubation is therefore a problem to be solved.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art by providing a method and a system for reconstructing a trachea model using computer vision and deep learning techniques that can accurately and rapidly reconstruct and record a three-dimensional trachea model.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a trachea model reconstruction method by using computer vision and deep learning technology comprises the following steps:
obtaining an image of the tracheal wall: an endoscope lens is used to capture continuous images from the mouth into the trachea;
loading the image data: the continuous images captured by the endoscope lens are loaded and stored for subsequent processing;
image processing: the captured continuous images are denoised and noise-reduced, and image enhancement is applied to emphasize image detail and obtain clear images;
image feature acquisition: feature points are extracted from the clear images produced by the image processing step using local-extremum feature extraction, screened, and then stored;
image comparison: the image feature points of each pair of consecutive images processed in the image feature acquisition step are compared to find the common image feature points they contain, which are recorded and stored;
pose estimation and spatial conversion: deep learning assists the identification of the common image feature points, so as to estimate the position and posture reached by the endoscope lens in the trachea, in three-dimensional space, when the common image feature points were captured, and to convert this into spatial information on the depth and angle at which the endoscope lens extends into the trachea;
reconstructing a three-dimensional trachea model: the common image feature points processed in the image comparison step are projected into three-dimensional space and, together with the spatial information on the shooting depth and angle of the endoscope lens obtained in the pose estimation and spatial conversion step, are reconstructed and recorded as an actual three-dimensional trachea model;
by means of the method, the three-dimensional trachea model can be quickly and correctly reconstructed, and therefore, the method can assist personnel in intubation treatment.
Therefore, the trachea model reconstruction method can be used for accurately and quickly reconstructing and recording the three-dimensional trachea model for subsequent medical research or use.
The invention has the beneficial effect that the three-dimensional trachea model can be correctly and quickly reconstructed and recorded.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of the steps of the present invention.
Fig. 2 is a system block diagram of the present invention.
FIG. 3 is a block diagram of a system incorporating an endoscope lens according to the present invention.
The reference numbers in the figures illustrate:
10 image data loading module
20 image processing module
30 image feature extraction module
40 image comparison module
50 pose estimation algorithm module
60 three-dimensional model reconstruction module
70 endoscope lens
Detailed Description
The technical means and the structure thereof applied to achieve the object of the present invention will be described in detail with reference to the embodiments shown in fig. 1 to 3 as follows:
as shown in fig. 1, the trachea model reconstruction method using computer vision and deep learning technology in the embodiment includes the following steps:
obtaining an image of the tracheal wall: the endoscope lens 70 is used to capture continuous images from the mouth to the trachea.
Loading the image data: the continuous images captured by the endoscope lens 70 are loaded and stored for subsequent processing.
Image processing: the continuous images captured are denoised and denoised, and the image is enhanced to emphasize the image details to obtain clear images.
Image feature acquisition: feature points are extracted from the processed continuous images and screened using a local-extremum feature extraction method (such as SIFT, SURF, or ORB), and the extracted, screened image feature points are stored.
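A minimal illustration of local-extremum keypoint detection, the idea underlying the SIFT/SURF/ORB detectors named above (those operate over scale-space pyramids; this toy version scans a single 3x3 neighbourhood):

```python
import numpy as np

def local_maxima(img, threshold=0.0):
    """Return (row, col) coordinates of pixels that are strict maxima of
    their 3x3 neighbourhood and exceed `threshold` -- a toy stand-in for
    the local-extremum keypoint detection used by SIFT/SURF/ORB."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    h, w = img.shape
    is_max = np.ones((h, w), dtype=bool)
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:      # skip the centre pixel itself
                continue
            is_max &= img > pad[dy:dy + h, dx:dx + w]
    return np.argwhere(is_max & (img > threshold))

img = np.zeros((10, 10))
img[3, 4] = 5.0   # two isolated bright spots -> two keypoints
img[7, 2] = 3.0
kps = local_maxima(img, threshold=1.0)
print(kps.tolist())  # [[3, 4], [7, 2]]
```

The threshold acts as the "screening" step the patent mentions: weak extrema are discarded before the surviving feature points are stored.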
Image comparison: the image feature points of each pair of consecutive images processed in the image feature acquisition step are compared to find the common image feature points they contain, which are recorded and stored.
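Finding common feature points between consecutive frames is typically done by nearest-neighbour search over descriptors with Lowe's ratio test; a hedged NumPy sketch (the descriptors and ratio threshold here are illustrative, not taken from the patent):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Match descriptors between two consecutive frames with Lowe's ratio
    test: accept a match only when the nearest neighbour in frame B is
    clearly closer than the second nearest. Returns (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 4-D descriptors: frame B reuses two of frame A's descriptors with small noise.
desc_a = np.array([[1.0, 0, 0, 0],
                   [0, 1.0, 0, 0],
                   [0, 0, 1.0, 0]])
desc_b = np.array([[0, 1.02, 0, 0],        # ~A[1]
                   [0.98, 0, 0, 0],        # ~A[0]
                   [5.0, 5.0, 5.0, 5.0]])  # unrelated structure
print(match_features(desc_a, desc_b))  # [(0, 1), (1, 0)]
```

The ambiguous descriptor A[2] is rejected by the ratio test, which is exactly the behaviour wanted when recording only reliably "common" feature points.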
Pose estimation and spatial conversion: deep learning assists the identification of the common image feature points, so as to estimate the position and posture reached by the endoscope lens 70 in the trachea, in three-dimensional space, when the common image feature points were captured, and to convert this into spatial information on the depth and angle at which the endoscope lens 70 extends into the trachea.
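The patent leaves the pose estimator unspecified beyond "deep-learning-assisted". As a purely geometric baseline for the same step, the Kabsch algorithm recovers the rotation and translation relating two sets of corresponding 3-D points:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch alignment: find rotation R and translation t with Q = R @ P + t.
    A geometric baseline for the pose-estimation step (the patent layers a
    deep-learning model on top of the matched feature points)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Ground-truth pose: 30 degree yaw plus a small translation.
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1.0]])
t_true = np.array([[0.1], [0.0], [0.5]])
P = np.random.default_rng(1).normal(size=(3, 8))   # 8 tracked 3-D points
Q = R_true @ P + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a monocular endoscope pipeline the relative pose would more likely come from the essential matrix over 2-D matches; this 3-D-to-3-D version is the simplest self-contained illustration of "estimating position and posture" from common points.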
Reconstructing the three-dimensional trachea model: the common image feature points processed in the image comparison step are projected into three-dimensional space and, together with the spatial information on the shooting depth and angle of the endoscope lens 70 obtained in the pose estimation and spatial conversion step, are reconstructed and recorded as an actual three-dimensional trachea model.
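Projecting common feature points into three-dimensional space amounts to triangulation from two camera poses; a standard linear (DLT) triangulation sketch, with illustrative camera matrices not taken from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point that projects to
    pixel x1 under camera matrix P1 and to pixel x2 under P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, in homogeneous coordinates
    return X[:3] / X[3]

# Two views: identity camera and one translated along x (unit intrinsics assumed).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
x1, x2 = project(P1, X_true), project(P2, X_true)
print(np.round(triangulate(P1, P2, x1, x2), 6))  # recovers [0.5, 0.2, 4.0]
```

Repeating this for every common feature point, under the poses from the previous step, yields the point cloud from which the trachea model is reconstructed.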
By means of the method, the three-dimensional trachea model can be quickly and correctly reconstructed, and therefore, the method can assist personnel in intubation treatment.
In order to achieve the above method, the model reconstruction system of the present invention, with reference to the embodiments shown in fig. 2 to 3, is described in detail as follows:
as shown in fig. 2, the trachea model reconstruction system using computer vision and deep learning technology of the present invention includes an image data loading module 10, an image processing module 20, an image feature extraction module 30, an image comparison module 40, a pose estimation algorithm module 50, and a three-dimensional model reconstruction module 60; wherein:
the image loading module 10 (please refer to fig. 3) is connected to the endoscope lens 70 for loading and storing the continuous images captured by the endoscope lens 70 from the oral cavity into the trachea for subsequent processing.
The image processing module 20 (please refer to fig. 3) is connected to the image data loading module 10 and is used to receive the continuous images loaded by that module, perform denoising and noise reduction on them, and emphasize image detail with image enhancement techniques to obtain clear images.
The image feature extraction module 30 (please refer to fig. 3) is connected to the image processing module 20 and is used to extract and screen feature points from the clear images processed by the image processing module 20 using local-extremum feature extraction, and to store the extracted, screened image feature points.
In view of the above, the local-extremum feature extraction method may be Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), or the like.
The image comparison module 40 (please refer to fig. 3) is connected to the image feature extraction module 30 and is used to receive the image feature points extracted and screened by that module, compare the image feature points of each pair of consecutive images, and find, record, and store the common image feature points.
The pose estimation algorithm module 50 (please refer to fig. 3) has a deep learning function and is connected to the image comparison module 40. It receives the common image feature points found by the image comparison module 40 and uses the deep learning model for assisted identification, so as to estimate from the common feature points in the continuous images the position and posture reached by the endoscope lens 70 in the trachea, in three-dimensional space, when each image was captured, and then to convert this into spatial information on the depth and angle at which the endoscope lens 70 extends into the trachea.
The three-dimensional model reconstruction module 60 (please refer to fig. 3) is connected to the image comparison module 40 and the pose estimation algorithm module 50 and is used to receive the common image feature points found by the image comparison module 40 and the spatial information converted by the pose estimation algorithm module 50, so as to project all the image feature points into three-dimensional space and, using that spatial information, reconstruct and record a complete three-dimensional trachea model.
In the pose estimation and spatial conversion step and in the pose estimation algorithm module 50, tracheal image data from a number of patients are captured to obtain image feature points, and these feature points together with the captured images are input into a deep learning model. The deep learning model may be of the supervised, unsupervised, semi-supervised, or reinforcement learning type (for example a neural network, random forest, support vector machine (SVM), decision tree, or clustering method). Through the deep learning model, the path trajectory of the endoscope lens extending into the trachea (its depth, angle, path position, and direction) is identified, and the features and shape of the tracheal wall can also be recognized.
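The paragraph above lists model families without choosing one. As a minimal stand-in for the supervised option, logistic regression trained by gradient descent on toy two-dimensional "descriptors" (all names and data here are illustrative, not the patent's model):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=500):
    """Minimal supervised learner (logistic regression by gradient descent),
    a toy stand-in for the unspecified deep-learning model that classifies
    tracheal-wall feature descriptors."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Toy descriptors: class 1 clusters near (2, 2), class 0 near (-2, -2).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y = np.array([1] * 20 + [0] * 20)
w, b = train_logistic(X, y)
acc = (predict(w, b, X) == y).mean()
print(acc)  # 1.0 on this separable toy set
```

Any of the listed alternatives (random forest, SVM, a neural network) could be swapped in behind the same train/predict interface.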
Thus, after continuous images are captured by the endoscope lens, the images are denoised, noise-reduced, and detail-enhanced; common feature points are then extracted from the image feature points and compared; pose estimation with a deep learning function yields the position and posture of each continuous image, and from that the depth and angle of the endoscope lens inside the trachea, so that the movement track of the lens can be traced. Using computer-vision feature extraction together with visual odometry, the three-dimensional trachea model is reconstructed correctly and quickly for intubation assistance and for subsequent medical research or use.
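The visual-odometry idea referred to above chains frame-to-frame pose estimates into a camera trajectory; a short sketch of that composition step (the square path is synthetic test data, not endoscope output):

```python
import numpy as np

def compose_trajectory(relative_poses):
    """Chain frame-to-frame (R, t) estimates, as visual odometry does, into
    camera positions along the path down the trachea (world frame = frame 0).
    Each t is the step expressed in the previous camera's frame."""
    R_w = np.eye(3)
    t_w = np.zeros(3)
    positions = [t_w.copy()]
    for R, t in relative_poses:
        t_w = t_w + R_w @ t          # move in the current orientation
        R_w = R_w @ R                # accumulate rotation
        positions.append(t_w.copy())
    return np.array(positions)

# Four unit steps forward with a 90 degree yaw between each: a square path.
yaw90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
step = np.array([1.0, 0.0, 0.0])
traj = compose_trajectory([(yaw90, step)] * 4)
print(traj[-1])  # returns to the origin: [0. 0. 0.]
```

In the system described here, the accumulated positions would trace the endoscope's path down the trachea, and the triangulated feature points attached to each pose would form the reconstructed model.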
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still fall within the scope of the technical solution of the present invention.

Claims (2)

1. A trachea model reconstruction method using computer vision and deep learning technology, characterized by comprising the following steps:
obtaining images of the tracheal wall: using an endoscope lens to capture continuous images from the mouth into the trachea;
loading the image data: loading and storing the continuous images captured by the endoscope lens for subsequent processing;
image processing: denoising and noise-reducing the captured continuous images, and applying image enhancement to emphasize image detail;
image feature extraction: extracting and screening feature points from the continuous images produced by the image processing step using local-extremum feature extraction, and storing the extracted, screened image feature points;
image comparison: comparing the image feature points of each pair of consecutive images processed in the image feature extraction step, finding the common image feature points they contain, and recording and storing them;
pose estimation and spatial conversion: using deep learning to assist the identification of the common image feature points, so as to estimate the position and posture reached by the endoscope lens in the trachea, in three-dimensional space, when the common image feature points were captured, and to convert this into spatial information on the depth and angle at which the endoscope lens extends into the trachea;
reconstructing a three-dimensional trachea model: projecting the common image feature points processed in the image comparison step into three-dimensional space and, together with the spatial information on the shooting depth and angle of the endoscope lens obtained in the pose estimation and spatial conversion step, reconstructing and recording an actual three-dimensional trachea model.
2. A trachea model reconstruction system using computer vision and deep learning technology, characterized in that it applies the above trachea model reconstruction method and comprises an image data loading module, an image processing module, an image feature extraction module, an image comparison module, a pose estimation algorithm module, and a three-dimensional model reconstruction module, wherein:
the image data loading module is connected to the endoscope lens and is used to load and store the continuous images captured by the endoscope lens as it passes from the oral cavity into the trachea, for subsequent processing;
the image processing module is connected to the image data loading module and is used to receive the continuous images loaded by the image data loading module, perform denoising and noise reduction on them, and emphasize image detail using image enhancement techniques;
the image feature extraction module is connected to the image processing module and is used to extract and screen feature points from the continuous images processed by the image processing module using local-extremum feature extraction, and to store the extracted, screened image feature points;
the image comparison module is connected to the image feature extraction module and is used to receive the image feature points extracted and screened by the image feature extraction module, compare the image feature points of each pair of consecutive images, and find, record, and store the common image feature points they contain;
the pose estimation algorithm module is connected to the image comparison module and has a deep learning function; it is used to receive the common image feature points found by the image comparison module and, with deep-learning-assisted identification, to estimate from the common feature points in the continuous images the position and posture reached by the endoscope lens in the trachea, in three-dimensional space, when the images were captured, and then to convert this into spatial information on the depth and angle at which the endoscope lens extends into the trachea;
the three-dimensional model reconstruction module is connected to the image comparison module and the pose estimation algorithm module and is used to receive the common image feature points found by the image comparison module and the spatial information converted by the pose estimation algorithm module, so as to project all the image feature points into three-dimensional space and, supplemented by the spatial information obtained by the pose estimation algorithm module, to reconstruct and record a complete three-dimensional trachea model.
CN201910334929.0A 2019-01-31 2019-04-24 Trachea model reconstruction method and system by using computer vision and deep learning technology Pending CN111508057A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW108201580U TWM584008U (en) 2019-01-31 2019-01-31 Trachea model reconstruction system utilizing computer vision and deep learning technology
TW108201580 2019-01-31

Publications (1)

Publication Number Publication Date
CN111508057A (en) 2020-08-07

Family

ID=68620096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910334929.0A Pending CN111508057A (en) 2019-01-31 2019-04-24 Trachea model reconstruction method and system by using computer vision and deep learning technology

Country Status (2)

Country Link
CN (1) CN111508057A (en)
TW (1) TWM584008U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115719446A (en) * 2021-08-23 2023-02-28 财团法人车辆研究测试中心 Feature point integration positioning system and feature point integration positioning method
WO2024239125A1 (en) 2023-05-24 2024-11-28 Centre Hospitalier Universitaire Vaudois Apparatus and method for machine vision guided endotracheal intubation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011221988A (en) * 2010-03-24 2011-11-04 National Institute Of Advanced Industrial & Technology Three-dimensional position posture measurement device by stereo image, method and program
CN103371870A (en) * 2013-07-16 2013-10-30 深圳先进技术研究院 Multimode image based surgical operation navigation system
CN103412401A (en) * 2013-06-07 2013-11-27 中国科学院上海光学精密机械研究所 Endoscope and pipeline wall three-dimensional image reconstruction method
CN108363387A (en) * 2018-01-11 2018-08-03 驭势科技(北京)有限公司 Sensor control method and device
CN108534782A (en) * 2018-04-16 2018-09-14 电子科技大学 A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
彭小羽: "Research on endoscopic visual SLAM methods in minimally invasive surgery", China Master's and Doctoral Dissertations Full-text Database (electronic journal), Medicine and Health Sciences series *
揭云飞 et al.: "Analysis of visual SLAM systems", Computer Knowledge and Technology *
陈慧岩 et al.: "Theory and Design of Unmanned Vehicles", 31 March 2018, Beijing Institute of Technology Press *


Also Published As

Publication number Publication date
TWM584008U (en) 2019-09-21

Similar Documents

Publication Publication Date Title
CN107239728B (en) Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation
CN104077579B (en) Facial expression recognition method based on expert system
US20120238866A1 (en) Method and System for Catheter Tracking in Fluoroscopic Images Using Adaptive Discriminant Learning and Measurement Fusion
US20200305847A1 (en) Method and system thereof for reconstructing trachea model using computer-vision and deep-learning techniques
CN105869166B (en) A kind of human motion recognition method and system based on binocular vision
CN108305283A (en) Human bodys' response method and device based on depth camera and basic form
CN112861588B (en) Living body detection method and device
CN112883940A (en) Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium
CN110598580A (en) Human face living body detection method
CN110477907B (en) Modeling method for intelligently assisting in recognizing epileptic seizures
CN110580454A (en) Liveness detection method and device
CN111508057A (en) Trachea model reconstruction method and system by using computer vision and deep learning technology
JP4691570B2 (en) Image processing apparatus and object estimation program
CN107506713A (en) Living body faces detection method and storage device
Ling et al. Virtual contour guided video object inpainting using posture mapping and retrieval
CN111860057A (en) Face image blurring and living body detection method, device, storage medium and device
CN112992340A (en) Disease early warning method, device, equipment and storage medium based on behavior recognition
CN110430416B (en) Free viewpoint image generation method and device
WO2020012935A1 (en) Machine learning device, image diagnosis assistance device, machine learning method, and image diagnosis assistance method
CN111160208B (en) Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model
ElSayed et al. Ambient and wearable sensing for gait classification in pervasive healthcare environments
CN116363263B (en) Image editing method, system, electronic device and storage medium
CN118942680A (en) Coronary heart disease auxiliary diagnosis system, system operation method and diagnosis device
US20200305846A1 (en) Method and system for reconstructing trachea model using ultrasonic and deep-learning techniques
US11880987B2 (en) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200807