CN111508057A - Trachea model reconstruction method and system by using computer vision and deep learning technology - Google Patents
- Publication number
- CN111508057A (application number CN201910334929.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- trachea
- feature points
- image feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Epidemiology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Radiology & Medical Imaging (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Endoscopes (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a trachea model reconstruction method using computer vision and deep learning techniques, comprising the steps of obtaining tracheal-wall images, loading the image data, processing the images, capturing image features, comparing the images, estimating pose and performing spatial conversion, and reconstructing a three-dimensional trachea model; a trachea model reconstruction system capable of accurately and quickly reconstructing and recording a three-dimensional trachea model is thereby provided.
Description
Technical Field
The invention relates to a trachea model reconstruction method and system using computer vision and deep learning techniques, and in particular to a method and system that can accurately and quickly reconstruct and record a three-dimensional trachea model.
Background
When patients are under general anesthesia, undergoing cardiopulmonary resuscitation, or unable to breathe on their own during surgery, intubation is required so that an artificial airway can be inserted into the trachea and medical gas delivered smoothly. During intubation, medical staff cannot directly observe and adjust the artificial airway; they must rely on touch and past experience, and to avoid puncturing the patient's trachea the procedure often has to be repeated several times, delaying the establishment of a patent airway. Therefore, rapidly and accurately building a three-dimensional tracheal model to assist medical staff with intubation is a problem to be solved.
Disclosure of Invention
The present invention is directed to overcoming the above drawbacks of the prior art by providing a method and a system for reconstructing a trachea model using computer vision and deep learning techniques that can accurately and rapidly reconstruct and record a three-dimensional trachea model.
The technical solution adopted by the invention to solve this problem is as follows:
a trachea model reconstruction method using computer vision and deep learning techniques comprises the following steps:
obtaining an image of the tracheal wall: an endoscope lens is used to capture continuous images along the path from the mouth into the trachea;
loading the image data: the continuous images captured by the endoscope lens are loaded and stored for subsequent processing;
image processing: the captured continuous images are denoised, and image details are emphasized through image enhancement processing to obtain clear images;
image feature acquisition: feature points are extracted from the clear images produced by the image processing step using a local-extremum feature extraction method, screened, and then stored;
image comparison: the image feature points of each pair of consecutive images from the image feature acquisition step are compared to find the common image feature points contained in both images, which are recorded and stored;
pose estimation and spatial conversion: deep learning assists in identifying the common image feature points, so that the position and posture reached by the endoscope lens within the trachea in three-dimensional space when the common image feature points were captured can be estimated, and the depth and angle at which the endoscope lens extends into the trachea can be converted into spatial information;
reconstructing a three-dimensional trachea model: the common image feature points from the image comparison step are projected into three-dimensional space, and together with the depth and angle information obtained in the pose estimation and spatial conversion step, an actual three-dimensional trachea model is reconstructed and recorded;
by means of this method, the three-dimensional trachea model can be reconstructed quickly and correctly, thereby assisting personnel in performing intubation.
Therefore, the trachea model reconstruction method can accurately and quickly reconstruct and record a three-dimensional trachea model for subsequent medical research or use.
The beneficial effect of the invention is that the three-dimensional trachea model can be correctly and quickly reconstructed and recorded.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of the steps of the present invention.
Fig. 2 is a system block diagram of the present invention.
FIG. 3 is a block diagram of a system incorporating an endoscope lens according to the present invention.
The reference numbers in the figures illustrate:
10 image data loading module
20 image processing module
30 image feature capturing module
40 image comparison module
50 pose estimation algorithm module
60 three-dimensional model reconstruction module
70 endoscope lens
Detailed Description
The technical means and the structure thereof applied to achieve the object of the present invention will be described in detail with reference to the embodiments shown in fig. 1 to 3 as follows:
as shown in fig. 1, the trachea model reconstruction method using computer vision and deep learning technology in the embodiment includes the following steps:
obtaining an image of the tracheal wall: the endoscope lens 70 is used to capture continuous images from the mouth to the trachea.
Loading the image data: the continuous images captured by the endoscope lens 70 are loaded and stored for subsequent processing.
Image processing: the continuous images captured are denoised and denoised, and the image is enhanced to emphasize the image details to obtain clear images.
Image feature acquisition: the method comprises extracting and screening feature points of the images of the continuous images after image processing by feature extraction method (such as SIFT, SURF, ORB, etc.) of local extremum, and storing the extracted and screened image feature points.
Image comparison: and comparing the image characteristic points of the two continuous images which are connected after the image characteristic acquisition processing, finding out the common image characteristic points contained in the images, and recording and storing the common image characteristic points.
Pose estimation and spatial conversion: the common image feature points are recognized in an auxiliary manner by deep learning, so that the position and the posture of the trachea in the three-dimensional space reached when the endoscope lens 70 shoots the common image feature points are estimated, and the spatial information of the depth and the angle when the endoscope lens 70 extends into the trachea to shoot is converted.
Reconstructing a three-dimensional trachea model: the common image feature points from the image comparison step are projected into three-dimensional space, and together with the depth and angle information of the endoscope lens 70 obtained in the pose estimation and spatial conversion step, an actual three-dimensional trachea model is reconstructed and recorded.
By means of this method, the three-dimensional trachea model can be reconstructed quickly and correctly, thereby assisting personnel in performing intubation.
In order to achieve the above method, the model reconstruction system of the present invention, with reference to the embodiments shown in fig. 2 to 3, is described in detail as follows:
as shown in fig. 2, the trachea model reconstruction system using computer vision and deep learning technology of the present invention includes an image data loading module 10, an image processing module 20, an image feature capturing module 30, an image comparison module 40, a pose estimation algorithm module 50 and a three-dimensional model reconstruction module 60; wherein:
the image loading module 10 (please refer to fig. 3) is connected to the endoscope lens 70 for loading and storing the continuous images captured by the endoscope lens 70 from the oral cavity into the trachea for subsequent processing.
The image processing module 20 (please refer to fig. 3) is connected to the image data loading module 10; it receives the continuous images loaded by that module, denoises them, and emphasizes image details using an image enhancement technique to obtain clear images.
The image feature capturing module 30 (please refer to fig. 3) is connected to the image processing module 20; it captures and screens the feature points of the clear images processed by the image processing module 20 using a local-extremum feature extraction method, and stores the captured and screened image feature points.
In view of the above, the local-extremum feature extraction method may be Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), or the like.
The image comparison module 40 (please refer to fig. 3) is connected to the image feature capturing module 30; it receives the image feature points captured and screened by that module, and compares each pair of consecutive images to find, record, and store the common image feature points.
The pose estimation algorithm module 50 (please refer to fig. 3) has a deep learning function and is connected to the image comparison module 40. It receives the common image feature points found by the image comparison module 40 and uses a deep learning model for auxiliary identification, so that the position and posture reached by the endoscope lens 70 within the trachea in three-dimensional space at the time of capture can be estimated from the common feature points in the continuous images, and the depth and angle at which the endoscope lens 70 extends into the trachea can then be converted into spatial information.
The three-dimensional model reconstruction module 60 (please refer to fig. 3) is connected to the image comparison module 40 and the pose estimation algorithm module 50. It receives the common image feature points found by the image comparison module 40 and the spatial information converted by the pose estimation algorithm module 50, projects all the image feature points into three-dimensional space, and uses the spatial information to reconstruct and record a complete three-dimensional trachea model.
In the pose estimation and spatial conversion step, and in the pose estimation algorithm module 50, trachea image data from a plurality of patients are captured to obtain image feature points, and these feature points together with the captured images are input into a deep learning model. The deep learning model may be of the supervised, unsupervised, semi-supervised, or reinforcement learning type (e.g., a neural network, random forest, support vector machine (SVM), decision tree, or clustering method). Through the deep learning model, the depth, angle, path position, and direction of the endoscope lens as it extends into the trachea are identified as a path trajectory, and the features and shape of the tracheal wall can also be identified.
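As a toy stand-in for the learning stage just described, the sketch below labels image feature vectors with the tracheal region they were captured in and trains one of the listed model families (a random forest). The data, labels, and feature dimensions are all synthetic assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for multi-patient training data: each tracheal
# "region" (depth bucket) gets a distinct descriptor distribution.
rng = np.random.default_rng(5)
n_per_class, n_dims = 100, 32
centers = rng.normal(0, 5, (3, n_dims))
X = np.vstack([c + rng.normal(0, 1, (n_per_class, n_dims)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)  # 0=upper, 1=middle, 2=lower region

# A random forest, one of the model types the description lists,
# learns to identify which region a feature vector came from.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)
```

In the actual system such a model would assist the geometric pose estimation by recognizing where along the tracheal path a frame was captured; the region labels here are purely illustrative.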
Therefore, after the continuous images are captured by the endoscope lens, they are denoised and their image details are strengthened; common feature points are then captured from the image feature points and compared. Pose estimation with a deep learning function yields the position and posture information of the continuous images, from which the depth and angle of the endoscope lens extending into the trachea are obtained, so that the moving track of the endoscope lens can be described. Using the feature capture methods of computer vision together with Visual Odometry, the three-dimensional trachea model is thus correctly and quickly reconstructed for intubation assistance and subsequent medical research or use.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still fall within the scope of the technical solution of the present invention.
Claims (2)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW108201580U TWM584008U (en) | 2019-01-31 | 2019-01-31 | Trachea model reconstruction system utilizing computer vision and deep learning technology |
| TW108201580 | 2019-01-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111508057A true CN111508057A (en) | 2020-08-07 |
Family
ID=68620096
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910334929.0A Pending CN111508057A (en) | 2019-01-31 | 2019-04-24 | Trachea model reconstruction method and system by using computer vision and deep learning technology |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN111508057A (en) |
| TW (1) | TWM584008U (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115719446A (en) * | 2021-08-23 | 2023-02-28 | 财团法人车辆研究测试中心 | Feature point integration positioning system and feature point integration positioning method |
| WO2024239125A1 (en) | 2023-05-24 | 2024-11-28 | Centre Hospitalier Universitaire Vaudois | Apparatus and method for machine vision guided endotracheal intubation |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2011221988A (en) * | 2010-03-24 | 2011-11-04 | National Institute Of Advanced Industrial & Technology | Three-dimensional position posture measurement device by stereo image, method and program |
| CN103371870A (en) * | 2013-07-16 | 2013-10-30 | 深圳先进技术研究院 | Multimode image based surgical operation navigation system |
| CN103412401A (en) * | 2013-06-07 | 2013-11-27 | 中国科学院上海光学精密机械研究所 | Endoscope and pipeline wall three-dimensional image reconstruction method |
| CN108363387A (en) * | 2018-01-11 | 2018-08-03 | 驭势科技(北京)有限公司 | Sensor control method and device |
| CN108534782A (en) * | 2018-04-16 | 2018-09-14 | 电子科技大学 | A kind of instant localization method of terrestrial reference map vehicle based on binocular vision system |
2019
- 2019-01-31 TW TW108201580U patent/TWM584008U/en not_active IP Right Cessation
- 2019-04-24 CN CN201910334929.0A patent/CN111508057A/en active Pending
Non-Patent Citations (3)
| Title |
|---|
| Peng, Xiaoyu: "Research on Endoscopic Visual SLAM Methods in Minimally Invasive Surgery", China Masters' and Doctoral Dissertations Full-text Database (Electronic Journal), Medicine and Health Sciences series * |
| Jie, Yunfei et al.: "Analysis of Visual SLAM Systems", Computer Knowledge and Technology * |
| Chen, Huiyan et al.: "Theory and Design of Unmanned Vehicles", 31 March 2018, Beijing Institute of Technology Press * |
Also Published As
| Publication number | Publication date |
|---|---|
| TWM584008U (en) | 2019-09-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107239728B (en) | Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation | |
| CN104077579B (en) | Facial expression recognition method based on expert system | |
| US20120238866A1 (en) | Method and System for Catheter Tracking in Fluoroscopic Images Using Adaptive Discriminant Learning and Measurement Fusion | |
| US20200305847A1 (en) | Method and system thereof for reconstructing trachea model using computer-vision and deep-learning techniques | |
| CN105869166B (en) | A kind of human motion recognition method and system based on binocular vision | |
| CN108305283A (en) | Human bodys' response method and device based on depth camera and basic form | |
| CN112861588B (en) | Living body detection method and device | |
| CN112883940A (en) | Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium | |
| CN110598580A (en) | Human face living body detection method | |
| CN110477907B (en) | Modeling method for intelligently assisting in recognizing epileptic seizures | |
| CN110580454A (en) | Liveness detection method and device | |
| CN111508057A (en) | Trachea model reconstruction method and system by using computer vision and deep learning technology | |
| JP4691570B2 (en) | Image processing apparatus and object estimation program | |
| CN107506713A (en) | Living body faces detection method and storage device | |
| Ling et al. | Virtual contour guided video object inpainting using posture mapping and retrieval | |
| CN111860057A (en) | Face image blurring and living body detection method, device, storage medium and device | |
| CN112992340A (en) | Disease early warning method, device, equipment and storage medium based on behavior recognition | |
| CN110430416B (en) | Free viewpoint image generation method and device | |
| WO2020012935A1 (en) | Machine learning device, image diagnosis assistance device, machine learning method, and image diagnosis assistance method | |
| CN111160208B (en) | Three-dimensional face super-resolution method based on multi-frame point cloud fusion and variable model | |
| ElSayed et al. | Ambient and wearable sensing for gait classification in pervasive healthcare environments | |
| CN116363263B (en) | Image editing method, system, electronic device and storage medium | |
| CN118942680A (en) | Coronary heart disease auxiliary diagnosis system, system operation method and diagnosis device | |
| US20200305846A1 (en) | Method and system for reconstructing trachea model using ultrasonic and deep-learning techniques | |
| US11880987B2 (en) | Image processing apparatus, image processing method, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200807 |