
WO2025053376A1 - Three-dimensional stomach navigation method and system for capsule endoscope - Google Patents


Info

Publication number
WO2025053376A1
WO2025053376A1 (Application PCT/KR2024/007432)
Authority
WO
WIPO (PCT)
Prior art keywords
capsule endoscope
landmark
dimensional
data
movement path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/007432
Other languages
French (fr)
Korean (ko)
Inventor
박종오
김자영
홍석민
조병우
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Korea Institute of Medical Microrobotics
Original Assignee
Korea Institute of Medical Microrobotics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Institute of Medical Microrobotics filed Critical Korea Institute of Medical Microrobotics
Publication of WO2025053376A1

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/000096Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00147Holding or positioning arrangements
    • A61B1/00158Holding or positioning arrangements using magnetic field
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/041Capsule endoscopes for imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A61B1/2736Gastroscopes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image

Definitions

  • the present invention relates to a three-dimensional gastric navigation method and system for a capsule endoscope, and more particularly, to a three-dimensional gastric navigation method and system for a capsule endoscope that guides a three-dimensional path so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.
  • Endoscopes generally refer to medical instruments used to examine the inside of the body for medical purposes. Depending on the area being examined, endoscopes are called bronchoscopes, gastroscopes, laparoscopes, colonoscopes, and nasal endoscopes. The most common existing gastrointestinal endoscopy method uses a catheter in the form of a hose or tube with a camera attached: a catheter equipped with a 2D camera and a light is inserted through the esophagus to observe the walls inside the stomach.
  • In this conventional gastroscopy method, the doctor directly manipulates the catheter and makes a diagnosis based on the image of the stomach interior displayed on the screen.
  • the conventional gastroscopy method has the problem that, if performed without sedation, it is accompanied by vomiting, abdominal distension, pain, and so on, causing the subject distress during the examination. If performed under sedation, it takes at least 30 minutes to 1 hour for the anesthetic to wear off, and an additional recovery period of at least 1 to 2 days is required.
  • sedated endoscopy carries additional risks of accidents, such as falling on the floor or down stairs due to loss of strength after anesthesia, or accidents while driving or operating equipment.
  • Propofol, a drug used in sedated endoscopy, has no known effective antidote at present, so there is a small chance of death due to failure to wake up.
  • Midazolam, a drug that does have an antidote, provides shallow sedation and weak analgesia, and has the problem of causing hallucinations in the subject.
  • Capsule endoscopes are devices that move along the digestive tract by the peristalsis of the digestive tract, photograph the inside of the human digestive tract, and transmit the photographed information to the outside via communication.
  • Medical capsule endoscopes have a small capsule shape, photograph the inside of a living body, and wirelessly transmit the photographed images of the inside of the living body to an external storage device.
  • the images of the inside of the living body captured by the endoscope are stored in the storage device.
  • the images of the inside of the living body stored in the storage device are displayed on a display device, etc. after a conversion process, and the reader observes the image data displayed on the display device to observe the organs inside the living body.
  • Korean Patent Publication No. 10-2020-0114844 discloses an electromagnetic field device for driving a micro robot inside the human body from outside the human body.
  • although these devices can solve the problem of the existing capsule endoscope having no external control, manual or automatic control of the capsule endoscope from the outside remains limited because the path along which the capsule endoscope moves cannot be searched in real time.
  • existing endoscopes have limitations in image information: low visibility because they film with a 2D camera, distortion because the image is obtained through a wide-angle lens with a wide angle of view inside a narrow space, differences in the quality of the examination and diagnosis depending on the skill of the person operating the capsule endoscope, and the inability to recheck specific parts after the examination. Due to these limitations in image information, it is difficult to find the path along which the capsule endoscope should move.
  • Korean Patent Publication No. 10-2022-0064464 discloses a magnetic field-controlled pH sensor-assisted navigation capsule endoscope that controls the battery to uniformly acquire normal images from the large intestine.
  • the above-mentioned prior document is characterized in that, in order to efficiently use power, it is determined whether the location of the capsule endoscope is the large intestine through the image captured by the capsule endoscope and the sensed pH, and if it is the large intestine, the battery is turned on, and if it is not the large intestine, the battery is turned off.
  • the above prior literature aims to consume power efficiently, so it only needs to determine approximately whether the capsule is in the large intestine or not; and since the criteria for determining the location include not only the captured image but also the pH, it has the limitation of requiring an additional pH sensing device, which is disadvantageous in terms of production cost and miniaturization.
  • the above prior literature does not present a technical feature for searching in real time the path that a capsule endoscope positioned inside an organ will move for diagnosis or treatment, and still does not resolve the problem that it is difficult to search the path that a capsule endoscope will move due to the limitations of image information.
  • a three-dimensional navigation method and system for a capsule endoscope is required, which reconstructs images captured by the capsule endoscope to be advantageous for searching for the path along which it will move, and searches for and guides the path along which the capsule endoscope will move based on the reconstructed images.
  • the purpose of the present invention is to provide a three-dimensional stomach navigation method and system for a capsule endoscope that guides a three-dimensional path in real time so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.
  • the present invention aims to restore capsule endoscope images captured in real time to 3D data so as to enable 3D navigation path search, and to visualize landmarks in the 3D data by aligning the restored 3D data with 3D landmark data.
  • the present invention aims to guide the capsule endoscope along a path to the next landmark location to enable the capsule endoscope to sequentially explore the inside of an organ.
  • the present invention is characterized by including a step of obtaining an image captured from a capsule endoscope inserted into a human body, a step of detecting a landmark from the image obtained from the capsule endoscope and determining position and pose information of the capsule endoscope, and a step of generating a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional gastroscopy map.
  • the step of acquiring the captured image may include a step of aligning the three-dimensionally restored stomach data with the pre-generated three-dimensional landmark data to form three-dimensional stomach data with the landmarks aligned.
  • the step of forming the three-dimensional stomach data in which the landmarks are aligned may include moving and rotating the pre-generated three-dimensional landmark data so that the Euclidean distance between the points constituting the pre-generated three-dimensional landmark data and the three-dimensionally restored stomach data is minimized, and overlapping the pre-generated three-dimensional landmark data with the three-dimensionally restored stomach data.
  • the step of detecting the landmark may include a step of inputting the captured image into an artificial intelligence trained with a two-dimensional landmark data set.
  • the method may include a step of extracting current position information and current pose information of the capsule endoscope from the identified position and pose information of the capsule endoscope when one of the multiple landmarks is detected.
  • the step of generating a movement path of the capsule endoscope may include, if the detected landmark is not the first landmark, a step of generating a path for moving to the first landmark based on current position information and current pose information of the capsule endoscope, and if the detected landmark is the first landmark, a step of generating a path for moving from the first landmark to a second landmark based on current position information and current pose information of the capsule endoscope.
  • the step of generating the movement path of the capsule endoscope can generate the movement path of the capsule endoscope so that the capsule endoscope moves sequentially according to the order of preset landmarks.
  • the landmark may include at least one area within the stomach among the cardia, fundus, greater curvature, angulus, and pylorus.
  • the present invention is characterized in that it includes an image receiving unit that acquires an image captured from a capsule endoscope inserted into a human body; a calculation unit that detects a landmark from the image acquired from the capsule endoscope and determines position and pose information of the capsule endoscope from three-dimensionally restored stomach data; a search unit that generates a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional gastroscopy map; and a control unit that controls the capsule endoscope to move along the movement path generated by the search unit.
  • control unit can control the operation of the capsule endoscope using an external magnetic field.
  • the present invention has the advantage of enabling convenient and accurate diagnosis and examination by guiding the movement path of a capsule endoscope in real time so as to explore the inside of the stomach.
  • the present invention has the advantage of being able to confirm the current position of the capsule endoscope by constructing an image captured by the capsule endoscope into virtual 3D data and providing visually effective path guidance.
  • FIG. 1 is a flowchart of a three-dimensional stomach navigation method of a capsule endoscope according to an embodiment of the present invention.
  • FIG. 2 illustrates an example of a three-dimensional stomach navigation method of a capsule endoscope according to an embodiment of the present invention.
  • FIG. 3 illustrates a process in which pre-generated 3D landmark data according to an embodiment of the present invention is aligned with the 3D-restored stomach data.
  • FIG. 4 illustrates an example of determining the current position of a capsule endoscope based on major landmark information inside the stomach according to an embodiment of the present invention.
  • FIG. 5 shows a path search process when the first landmark is detected according to an embodiment of the present invention.
  • FIG. 6 illustrates a capsule endoscope moving sequentially along the order of preset landmarks according to an embodiment of the present invention.
  • FIG. 7 shows a configuration diagram of a three-dimensional stomach navigation system, which is another embodiment of the present invention.
  • first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • FIG. 1 is a flowchart of a three-dimensional stomach navigation method of a capsule endoscope according to an embodiment of the present invention.
  • the three-dimensional stomach navigation method of a capsule endoscope may include a step of acquiring a photographed image (S101), a step of determining position and pose information of the capsule endoscope (S103), a step of generating a movement path of the capsule endoscope (S105), a step of controlling the capsule endoscope (S107), and a data storage step (S109).
  • each step of the three-dimensional navigation method of the capsule endoscope according to the present embodiment is sequentially performed, but it is not necessarily limited to this, and each step exemplified in FIG. 1 may be performed simultaneously or separately in the process of generating a movement path and driving the capsule endoscope (300) to move along the movement path.
  • a three-dimensional stomach navigation method of a capsule endoscope can provide an environment in which an examiner or doctor can conveniently and accurately perform a diagnosis and examination by guiding a three-dimensional path in real time so that a capsule endoscope (300) positioned inside a human body can explore the inside of the stomach.
  • the three-dimensional stomach navigation method of a capsule endoscope can restore a capsule endoscope (300) image captured in real time into three-dimensional data so that three-dimensional path exploration is possible, and the three-dimensionally restored stomach data and the pre-generated three-dimensional landmark data are aligned so that landmarks can be visualized in the three-dimensionally restored stomach data, thereby providing visually effective path guidance.
  • the three-dimensional stomach navigation method of the capsule endoscope can identify the current position of the capsule endoscope (300) and calculate pose information to provide a three-dimensional movement path on the three-dimensionally restored stomach data.
  • the three-dimensional navigation path search method can sequentially search multiple landmarks inside the organ according to a set order.
  • FIG. 2 illustrates an example of a three-dimensional navigation method of a capsule endoscope according to an embodiment of the present invention.
  • the three-dimensional navigation method of the capsule endoscope can restore a capsule endoscope image captured in real time into three-dimensional stomach data based on a Visual SLAM algorithm (S201).
  • the three-dimensional navigation method of the capsule endoscope can detect a landmark in the captured capsule endoscope image.
  • the three-dimensional navigation method of the capsule endoscope can visualize the landmark by aligning the generated three-dimensional landmark data with the three-dimensionally restored stomach data (S203).
  • the three-dimensional navigation method of the capsule endoscope can grasp the position and pose information of the capsule endoscope (300) in real time during the process of restoring the three-dimensional stomach data, and when a landmark is detected, the current position and current pose information of the capsule endoscope (300) can be extracted (S205). At this time, the extracted current location and current pose information can be used as a basis for generating a movement path.
  • the three-dimensional stomach navigation method of the capsule endoscope can generate a movement path based on the current landmark when a landmark is first detected (S207).
  • the three-dimensional stomach navigation method of the capsule endoscope can guide a movement path to move to the location of the second landmark, which is next in the search order, if the current landmark is the first landmark (S209).
  • the three-dimensional stomach navigation method of the capsule endoscope can guide a movement path to move to the location of the first landmark if the current landmark is not the first landmark (S211).
  • the three-dimensional stomach navigation method of the capsule endoscope can end the gastroscopy search when the capsule endoscope (300) explores the landmarks in a preset order and arrives at the last landmark (S213).
  • the step of acquiring a captured image can acquire an image captured from a capsule endoscope inserted into the human body.
  • the step (S101) of acquiring the captured image can restore the image captured by the capsule endoscope (300) into 3D data in real time.
  • the step (S101) of acquiring the captured image can prevent problems such as reduced visibility and image distortion of the capsule endoscope (300) image by guiding the movement path using the 3D restored data.
  • the step (S101) of acquiring a photographed image can perform, as a preliminary task for three-dimensional stomach data restoration, conversion of the image captured by a capsule endoscope (300) into a virtually stained image.
  • the step (S101) of acquiring a photographed image can separate consecutive frames of the images captured by a capsule endoscope (300) into R, G, and B channels and then convert them into virtually stained images.
  • the step (S101) of acquiring a photographed image can use a deep learning model trained with a data set consisting of an endoscopic image in which an organ is not stained and an endoscopic image in which an organ is stained.
  • the present embodiment describes that the preliminary task of three-dimensional stomach data restoration is performed in the step (S101) of acquiring a photographed image, but is not limited thereto.
  • the step (S101) of acquiring the captured image can improve the performance of the subsequent 3D stomach data restoration by converting the image captured by the capsule endoscope (300) into a virtually stained image. That is, when the 3D stomach data is restored using the virtually stained image, more feature points can be extracted compared to the unstained image, thereby restoring 3D stomach data that closely resembles the actual inside of the stomach.
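As an illustrative sketch of the preprocessing just described (the virtual-staining model itself is a trained deep learning network and is not reproduced here), separating an RGB frame into per-channel planes might look like the following; the nested-list frame layout is an assumption for illustration only:

```python
def split_channels(frame):
    """Split one RGB frame (a list of rows of (r, g, b) pixels)
    into three single-channel planes, as described for the
    virtual-staining preprocessing step."""
    r = [[px[0] for px in row] for row in frame]
    g = [[px[1] for px in row] for row in frame]
    b = [[px[2] for px in row] for row in frame]
    return r, g, b

# A 2x2 toy frame standing in for one capsule-endoscope frame.
frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (128, 128, 128)]]
r, g, b = split_channels(frame)
```

In a real pipeline the three planes (or the full RGB frame) would then be fed to the staining network trained on paired stained/unstained endoscopic images.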
  • the 3D-restored stomach data can be formed in real time by extracting feature points of the current frame and the previous frame from a captured image or a virtually stained image, and then matching the extracted feature points.
  • conventionally, the SfM (Structure from Motion) algorithm has been used for feature point extraction and feature point matching.
  • the SfM algorithm recovers three-dimensional data by detecting common feature points and finding correlations based on images acquired for one object at multiple points in time.
  • the SfM algorithm is not suitable for the characteristics of the present invention, which recovers images captured in real time by a capsule endoscope (300) into three-dimensional data.
  • the step (S101) of acquiring a photographed image uses a method of restoring an image photographed and transmitted in real time into three-dimensional data.
  • the step (S101) of acquiring a photographed image can restore an image photographed in real time into three-dimensional data using a Visual SLAM algorithm.
  • the Visual SLAM algorithm is advantageous in constructing real-time 3D data because it requires less computation than the SfM algorithm.
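The per-frame operation described above (extract feature points in the previous and current frame, then match them) can be sketched with a greedy nearest-neighbour matcher. Real Visual SLAM front ends use dedicated descriptors (e.g. ORB) and more robust matching with outlier rejection; this minimal stand-in, with made-up 2D descriptors, only illustrates the idea:

```python
import math

def match_features(desc_prev, desc_curr, max_dist=0.5):
    """Greedily pair each previous-frame descriptor with its nearest
    unused current-frame descriptor, rejecting pairs farther apart
    than max_dist. Returns a list of (prev_index, curr_index)."""
    matches, used = [], set()
    for i, d_prev in enumerate(desc_prev):
        best_j, best_dist = None, max_dist
        for j, d_curr in enumerate(desc_curr):
            if j in used:
                continue
            dist = math.dist(d_prev, d_curr)
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches

# Two toy descriptors per frame; the second frame lists them in reverse.
matches = match_features([(0.0, 0.0), (1.0, 1.0)],
                         [(1.1, 1.0), (0.05, 0.0)])
```

The resulting correspondences are what a SLAM back end would use to triangulate map points and track the camera.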
  • FIG. 3 illustrates a process in which pre-generated 3D landmark data according to an embodiment of the present invention is registered to the 3D-restored stomach data.
  • the step (S101) of acquiring a photographed image may include a step of registering the 3D-restored stomach data with the pre-generated 3D landmark data to form 3D stomach data with the landmarks registered.
  • image registration refers to a processing technique of transforming different images and representing them in a single coordinate system.
  • the same landmark as the landmark detected from the pre-generated 3D landmark data can be extracted and aligned to the 3D-restored stomach data.
  • the pre-generated 3D landmark data can be moved and rotated so that the Euclidean distance between the points constituting the pre-generated 3D landmark data and the 3D-restored stomach data is minimized, and the pre-generated 3D landmark data can be overlapped with the 3D-restored stomach data.
  • the step of forming landmark-aligned 3D stomach data can perform data alignment using the ICP (Iterative Closest Point) algorithm.
  • the ICP algorithm is an algorithm that calculates the distance between corresponding points when two point clouds are given and finds a transformation matrix that minimizes this distance.
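To make the ICP loop concrete, here is a deliberately simplified, translation-only sketch in 2D: pair every source point with its nearest destination point, then shift the source cloud by the mean residual. A full ICP as used for the landmark registration would also estimate rotation (typically via an SVD step) and work on 3D point clouds; the point sets below are toy data:

```python
import math

def icp_translation(src, dst, iters=10):
    """Translation-only ICP sketch: repeatedly find nearest-point
    pairs between the clouds and shift src by the mean offset."""
    src = [list(p) for p in src]
    for _ in range(iters):
        # Pair each source point with its closest destination point.
        pairs = [(p, min(dst, key=lambda q: math.dist(p, q))) for p in src]
        # Mean residual is the best pure-translation update.
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        src = [[p[0] + dx, p[1] + dy] for p in src]
    return src

# Landmark cloud offset slightly from the restored-stomach cloud.
dst = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
src = [(0.2, -0.1), (1.2, -0.1), (0.2, 0.9)]
aligned = icp_translation(src, dst)
```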
  • the step of forming landmark-aligned 3D data can visualize the landmark-aligned 3D data so that the landmark portion is distinguished from other portions.
  • the step of forming 3D landmark-aligned stomach data aligns 3D landmark data to virtual 3D data and visualizes the landmarks, thereby enabling the location of representative landmarks within the 3D-constructed stomach to be identified, and providing a visually effective 3D navigation function.
  • the step of forming 3D stomach data with aligned landmarks can generate 3D landmark data as a preliminary task of landmark detection.
  • the step of forming 3D stomach data with aligned landmarks can utilize a deep learning model trained with a data set composed of images of several frames corresponding to representative landmark areas inside the stomach photographed with a capsule endoscope (300).
  • the detection step (S103) can generate 3D landmark data by inputting an image corresponding to a representative landmark area inside the stomach into the trained deep learning model.
  • the step of forming landmark-aligned 3D stomach data is a preliminary task for the movement path generation performed in the step (S105) of generating a movement path of a capsule endoscope, to be described later, and can generate a 3D gastroscopy map that serves as a reference for the movement path.
  • the step of forming landmark-aligned 3D stomach data can convert a capsule endoscope image of a general or representative human stomach into 3D stomach data, and generate a 3D gastroscopy map by aligning the pre-generated 3D landmark data to the converted 3D stomach data.
  • the present embodiment describes that the preliminary task of landmark detection and movement path generation is performed in the step of forming landmark-aligned 3D stomach data, but is not limited thereto.
  • FIG. 4 shows an example of identifying the current position of a capsule endoscope based on major landmark information inside the stomach according to an embodiment of the present invention.
  • the step (S103) of identifying the position and pose information of the capsule endoscope can detect a landmark from an image acquired from the capsule endoscope (300).
  • a landmark means an area that can identify a specific position or a specific part inside the stomach.
  • a representative landmark of the stomach can include at least one area inside the stomach among the cardia, fundus, greater curvature, angulus, and pylorus.
  • Referring to the left picture of FIG. 4, four landmarks are set inside the stomach, consisting of Cardia & Fundus (Ex. 1), Greater Curvature (Ex. 2), Angulus (Ex. 3), and Pylorus (Ex. 4). Referring to the right picture of FIG. 4, it can be confirmed that the landmarks on the landmark-aligned 3D stomach data have better visibility than the landmarks on the 3D-restored stomach data.
  • the step (S103) of identifying the position and pose information of the capsule endoscope may include a step of inputting the captured image into an artificial intelligence learned with a two-dimensional landmark data set.
  • landmark detection can be performed in real time.
  • the artificial intelligence used may be a classification algorithm.
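The disclosure only states that a classification algorithm trained on a 2D landmark data set is used, so as a hedged stand-in, the decision step can be illustrated with a nearest-centroid classifier; the 2D feature vectors and centroid values below are purely hypothetical:

```python
import math

# Hypothetical class centroids over a 2D image-feature space; a real
# system would use a trained network over image features instead.
CENTROIDS = {
    "cardia_fundus":     (0.9, 0.1),
    "greater_curvature": (0.5, 0.5),
    "angulus":           (0.1, 0.9),
    "pylorus":           (0.9, 0.9),
}

def classify_landmark(feature):
    """Assign a frame's feature vector to the nearest class centroid."""
    return min(CENTROIDS, key=lambda name: math.dist(CENTROIDS[name], feature))
```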
  • the step (S103) of identifying the position and pose information of the capsule endoscope can identify the position and pose information of the capsule endoscope (300).
  • the position means the approximate position of the capsule endoscope (300) inside the stomach.
  • the pose information means the position, angle (rotation angle), or movement direction of the camera built into the capsule endoscope (300).
  • the step (S103) of identifying the position and pose information of the capsule endoscope can extract feature points from the current frame and the previous frame of the captured image, match them, and identify the pose information from the degree of movement and rotation of the capsule endoscope derived from those matches.
  • the identified position and pose information can be used for generating a movement path, etc.
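As a sketch of recovering pose from matched feature points, a 2D Kabsch-style rigid fit (the actual system recovers a full 3D pose) estimates the rotation angle and translation between frames; the matched point sets are toy data:

```python
import math

def estimate_pose_2d(prev_pts, curr_pts):
    """Estimate in-plane rotation theta and translation (tx, ty)
    between two frames from already-matched feature points."""
    n = len(prev_pts)
    cax = sum(p[0] for p in prev_pts) / n
    cay = sum(p[1] for p in prev_pts) / n
    cbx = sum(p[0] for p in curr_pts) / n
    cby = sum(p[1] for p in curr_pts) / n
    s_dot = s_cross = 0.0
    for (px, py), (qx, qy) in zip(prev_pts, curr_pts):
        ax, ay = px - cax, py - cay      # centred previous-frame point
        bx, by = qx - cbx, qy - cby      # centred current-frame point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)   # best-fit rotation angle
    tx = cbx - (cax * math.cos(theta) - cay * math.sin(theta))
    ty = cby - (cax * math.sin(theta) + cay * math.cos(theta))
    return theta, tx, ty

# Matched points rotated by 90 degrees and shifted by (1, 0).
theta, tx, ty = estimate_pose_2d(
    [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)],
    [(1.0, 1.0), (0.0, 0.0), (1.0, -1.0)])
```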
  • the step of extracting the current position information and current pose information of the capsule endoscope can extract the current position information and current pose information of the capsule endoscope from the identified position and pose information of the capsule endoscope when any one landmark among multiple landmarks is detected.
  • the step of generating a movement path of the capsule endoscope can generate the movement path of the capsule endoscope based on at least one of position information and pose information of the capsule endoscope, a landmark, or a 3D gastroscopy map.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path along which the capsule endoscope (300) can explore the periphery of the detected landmark when a landmark is detected.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path along which the capsule endoscope (300) moves to the location of the next landmark to be explored when a landmark is detected.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path along which the capsule endoscope (300) explores the periphery of the detected landmark when a landmark is detected and then moves to the location of the next landmark to be explored.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path of the capsule endoscope so that the capsule endoscope moves sequentially according to the order of preset landmarks.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path so that the capsule endoscope (300) searches around the first landmark and then moves to the location of the second landmark.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path so that, when the capsule endoscope (300) reaches the location of the second landmark, it searches around the second landmark and then moves to the location of the third landmark.
  • the step (S105) of generating a movement path of the capsule endoscope can end the search when the capsule endoscope (300) reaches the location of the third landmark.
  • the generated movement path can be provided three-dimensionally so that the capsule endoscope (300) can be controlled in the internal space of the stomach.
  • the movement path does not mean a path of movement on a two-dimensional plane, but can mean a path of movement in space three-dimensionally.
  • the step (S105) of generating a movement path of the capsule endoscope can generate a movement path that takes into account the internal position, angle, and movement direction of the capsule endoscope (300) so that the capsule endoscope (300) can move in the internal space of the stomach.
  • the step (S105) of generating a movement path of the capsule endoscope can provide a movement path that serves as a guide when an examiner or doctor operates the capsule endoscope (300) from the outside.
  • the step (S105) of generating a movement path of the capsule endoscope can provide a movement path to a control device if there is a separate control device that automatically controls the capsule endoscope (300).
  • the step (S105) of generating a movement path of the capsule endoscope may include a step of generating a path for moving to the first landmark based on the current location information and current pose information of the capsule endoscope if the detected landmark is not the first landmark, and a step of generating a path for moving from the first landmark to a second landmark based on the current location information and current pose information of the capsule endoscope if the detected landmark is the first landmark.
  • FIG. 5 illustrates a path search process when a landmark is first detected according to an embodiment of the present invention.
  • a step (S105) of generating a movement path of the capsule endoscope can determine whether the detected landmark is a first landmark, which is a starting point of the search (S303).
  • a step (S105) of generating a movement path of the capsule endoscope can generate a movement path to the location of the first landmark if the first detected landmark is not the first landmark, which is a starting point of the path search (S305).
  • the step of generating a movement path of a capsule endoscope (S105) can generate a path to move to a location of a second landmark, which is a point following the first landmark, if the landmark is a first landmark, which is a starting point of the path search (S307).
  • the step of generating a movement path of a capsule endoscope (S105) can, each time the capsule endoscope reaches the location of a landmark, generate a path to move to the location of the next landmark in the preset order (S309).
  • the step of generating a movement path of a capsule endoscope (S105) can end path guidance when the last landmark is reached (S311).
  • the step (S105) of generating a movement path of a capsule endoscope uses the position and pose information of the capsule endoscope (300) extracted when generating the movement path; however, when generating a path to a location inside the stomach that has not yet been reconstructed in the 3D stomach data, the movement path can be generated based on the three-dimensional gastrointestinal endoscope map.
  • the step (S101) of acquiring a photographed image restores the images photographed by the capsule endoscope (300) into 3D stomach data in real time, so when the capsule endoscope (300) reaches the first landmark, the 3D stomach data needed to generate a movement path to the other landmarks may not yet have been constructed.
  • the step (S105) of generating the movement path of the capsule endoscope can generate a movement path to the next landmark based on a 3D gastrointestinal endoscope map constructed in advance; since the internal shape of the stomach and the locations and shapes of its landmarks are largely similar across most people, this pre-built map can be utilized.
  • a movement path is first generated using the 3D gastrointestinal endoscope map; then, once the 3D stomach data covering the movement path has been reconstructed, the movement path can be corrected based on the reconstructed 3D stomach data.
  • the step (S107) of controlling the capsule endoscope can control the capsule endoscope (300) to move along the movement path generated in the step (S105) of generating the movement path of the capsule endoscope.
  • the control step (S107) can control the capsule endoscope (300) by changing the magnetic field of a control device that controls the operation of the capsule endoscope (300) using an external magnetic field.
  • the storage step can store various results generated in performing the present invention, such as 3D stomach data restored in real time, detected landmark images, data in which landmarks and 3D restored stomach data are aligned.
  • existing gastroscopy has the problem that a specific part cannot be re-examined after the procedure, and parts may be missed depending on the individual doctor's skill; the storage step can resolve this problem because the images captured by the capsule endoscope (300) and the reconstructed data are stored.
  • Fig. 7 shows a configuration diagram of a three-dimensional navigation system (100) according to another embodiment of the present invention.
  • the three-dimensional navigation system (100) may include an image receiving unit (110), a calculation unit (130), a search unit (150), and a control unit (170).
  • the image receiving unit (110) can obtain an image captured from a capsule endoscope inserted into the human body.
  • the image receiving unit (110) can receive an image captured from the capsule endoscope (300) via wireless communication.
  • the capsule endoscope (300) can capture several to several tens of still images or moving images per second and transmit data to the image receiving unit (110) in real time.
  • the image receiving unit (110) can perform the operation of the step (S101) of acquiring the captured image described above.
  • the calculation unit (130) can detect landmarks from images acquired from a capsule endoscope and determine position and pose information of the capsule endoscope from the three-dimensionally restored data.
  • the calculation unit (130) can perform the operation of the step (S103) of determining position and pose information of the capsule endoscope as described above.
  • the search unit (150) can generate a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, a landmark, or a 3D gastrointestinal endoscope map.
  • the search unit (150) can perform the operation of the step of generating the movement path of the capsule endoscope as described above.
  • the control unit (170) can control the capsule endoscope to move along the movement path generated by the search unit.
  • the control unit (170) can perform the operation of the step (S107) of controlling the capsule endoscope described above.
  • the storage unit (not shown) can store various results generated in performing the present invention, such as 3D stomach data restored in real time, detected landmark images, and data in which landmarks and 3D restored stomach data are aligned.
  • the storage unit (not shown) can perform the operation of the storage step described above.
  • the purpose of the present invention is to provide a three-dimensional stomach navigation method and system for a capsule endoscope that guides a three-dimensional path in real time so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.
  • the present invention aims to restore capsule endoscope images captured in real time to 3D data so as to enable 3D navigation path search, and to visualize landmarks in the 3D data by aligning the restored 3D data with 3D landmark data.
  • the present invention aims to guide the capsule endoscope along a path to the next landmark location to enable the capsule endoscope to sequentially explore the inside of an organ.
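
The sequential landmark guidance described in the bullets above (S303 through S311: route to the first landmark if the detected landmark is not the first, then advance through the preset order and end at the last landmark) can be sketched as a small state machine. This is an illustrative sketch only, not the disclosed implementation; the landmark names and the `on_landmark_detected` interface are hypothetical.

```python
# Illustrative sketch of the sequential landmark guidance of Fig. 5.
# The landmark order and the callback interface are hypothetical placeholders.
LANDMARK_ORDER = ["cardia", "fundus", "greater_curvature", "angulus", "pylorus"]

class LandmarkNavigator:
    def __init__(self, order=LANDMARK_ORDER):
        self.order = order
        self.next_index = 0          # index of the landmark to visit next
        self.finished = False

    def on_landmark_detected(self, landmark):
        """Return the next navigation target, or None when guidance ends."""
        if self.finished:
            return None
        if landmark != self.order[self.next_index]:
            # Not the expected landmark: route to the pending target
            # (e.g. to the first landmark at the start of the search, S305).
            return self.order[self.next_index]
        # Expected landmark reached: advance to the following one (S307/S309),
        # or end path guidance at the last landmark (S311).
        self.next_index += 1
        if self.next_index >= len(self.order):
            self.finished = True
            return None
        return self.order[self.next_index]

nav = LandmarkNavigator()
print(nav.on_landmark_detected("angulus"))  # not the first landmark -> "cardia"
print(nav.on_landmark_detected("cardia"))   # first landmark reached -> "fundus"
```

Here the caller would be the landmark detector: each time a landmark is recognized in a frame, the returned target is handed to the path-generation step.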

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Endoscopes (AREA)

Abstract

In order to accomplish the objective, the present invention comprises the steps of: obtaining a captured image from a capsule endoscope inserted into a human body; detecting a landmark from the image obtained from the capsule endoscope, and identifying position and pose information of the capsule endoscope; and generating a moving path of the capsule endoscope on the basis of at least one from among the position information and pose information of the capsule endoscope, the landmark, and a three-dimensional gastroscopy map.

Description

Three-dimensional stomach navigation method and system for capsule endoscope

The present invention was made with the support of the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health and Welfare, and the Ministry of Food and Drug Safety, under project identification number RS-2021-KD000001 and project number 1415184155. The specialized research management institution for the project is the Inter-Ministry Full-Cycle Medical Device Research and Development Project Group; the research program name is "Inter-Ministry Full-Cycle Medical Device Research and Development Project"; the research task name is "Development of a medical device for active precision delivery of therapeutic substances based on microcarriers for knee cartilage regeneration"; the lead institution is the Korea Institute of Medical Microrobotics; and the research period is from January 1, 2023 to December 31, 2023.

The present invention relates to a three-dimensional stomach navigation method and system for a capsule endoscope, and more particularly, to a three-dimensional stomach navigation method and system that guides a three-dimensional path so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.

An endoscope generally refers to a medical instrument for examining the inside of the body for medical purposes. Depending on the area examined, endoscopes are referred to as bronchoscopes, gastroscopes, laparoscopes, colonoscopes, or transnasal endoscopes. A typical existing gastroscopy is performed with a hose- or tube-shaped catheter fitted with a camera: a catheter carrying a 2D camera and a light source is inserted through the esophagus to observe the stomach wall and other internal structures.

In this conventional gastroscopy method, the doctor directly manipulates the catheter and diagnoses the subject based on the intragastric images shown on a screen. When performed without sedation, the procedure is accompanied by vomiting, abdominal distension, and pain, causing the subject distress during the examination; when performed under sedation, the anesthetic takes at least 30 minutes to an hour to wear off, and an additional recovery period of at least one to two days is required.

In particular, sedated endoscopy carries additional accident risks, such as falls on the floor or stairs due to weakness after sedation, or accidents while driving or operating machinery. Moreover, propofol, a drug used in sedated endoscopy, has no known effective antidote to date, and in rare cases the subject fails to wake up and dies. Midazolam, a drug for which an antidote exists, provides shallower sedation and weaker analgesia, and can cause the subject to hallucinate.

Capsule endoscopes were developed to compensate for these shortcomings of conventional gastroscopy. A capsule endoscope is a device that moves along the digestive tract by peristalsis, photographs the inside of the human digestive tract, and transmits the captured information to the outside via communication. A medical capsule endoscope has the form of a small capsule; it photographs the inside of the body and wirelessly transmits the captured in-vivo images to an external storage device. The in-vivo images captured by the endoscope are stored in the storage device, displayed on a display device after a conversion process, and a reader observes the displayed image data to examine the organs inside the body.

Such existing capsule endoscopes have difficulty covering the entire gastrointestinal tract, cannot easily confirm their current location in real time, and, because they are moved by gastrointestinal peristalsis, cannot be additionally controlled from the outside.

In this regard, Korean Patent Publication No. 10-2020-0114844 discloses an electromagnetic field device for driving a microrobot inside the human body from outside the body. Such devices solve the problem that existing capsule endoscopes cannot be additionally controlled from the outside; however, because the path along which the capsule endoscope moves cannot be searched in real time, there remain limits to controlling the capsule endoscope manually or automatically from the outside.

In addition, existing endoscopes have limitations in their image information: visibility is low because they record with a 2D camera; the images are distorted because a wide-angle lens with a wide field of view is used to capture the inside of a narrow region; the clarity of examination and diagnosis varies with the skill of the person operating the capsule endoscope; and specific parts cannot be re-examined after the procedure. These limitations of the image information make it difficult to search for the path along which the capsule endoscope should move.

In this regard, Korean Patent Publication No. 10-2022-0064464 discloses a magnetically steerable, pH-sensor-assisted navigation capsule endoscope that controls its battery so as to uniformly acquire normal images starting from the large intestine. To use power efficiently, this prior document determines, from the images captured by the capsule endoscope and the sensed pH, whether the capsule endoscope is located in the large intestine, turning the battery on if it is and off if it is not.

Because this prior document aims at efficient power consumption, it only needs to determine an approximate location, namely whether or not the capsule is in the large intestine; and because its localization relies not only on the captured images but also on pH, an additional pH sensing device is required, which is disadvantageous for production cost and miniaturization.

That is, the prior document does not present a technique for searching in real time the path along which a capsule endoscope located inside an organ should move for diagnosis or treatment, and it still fails to resolve the difficulty of searching for the capsule endoscope's movement path caused by the limitations of the image information.

Accordingly, there is a need for a three-dimensional stomach navigation method and system for a capsule endoscope that reconstructs the images captured by the capsule endoscope into a form favorable for movement-path search, and that searches for and guides the movement path of the capsule endoscope based on the reconstructed images.

An object of the present invention is to provide a three-dimensional stomach navigation method and system for a capsule endoscope that guides a three-dimensional path in real time so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.

The present invention also aims to reconstruct capsule endoscope images captured in real time into 3D data so as to enable 3D navigation path search, and to visualize landmarks in the 3D data by registering the reconstructed 3D data with three-dimensional landmark data.

The present invention further aims to guide the capsule endoscope along a path to the next landmark location so that the capsule endoscope sequentially explores the inside of the organ.

To achieve the above objects, the present invention is characterized by including: a step of acquiring an image captured by a capsule endoscope inserted into a human body; a step of detecting a landmark from the image acquired by the capsule endoscope and identifying position and pose information of the capsule endoscope; and a step of generating a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional gastroscopy map.

Preferably, the step of acquiring the captured image may include a step of registering the three-dimensionally reconstructed stomach data with pre-generated three-dimensional landmark data to form landmark-registered three-dimensional stomach data.

Preferably, the three-dimensionally reconstructed stomach data may be formed in real time by extracting feature points from the current frame and the previous frame of the captured image and then matching the extracted feature points.
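
As an illustration of how matched feature points from consecutive frames can be lifted into 3D points in real time, the sketch below performs linear (DLT) triangulation of one matched point from two views. It is a simplified stand-in for the reconstruction described above and assumes NumPy; the synthetic projection matrices replace the capsule camera's actual calibration and estimated poses.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices of the previous and current frames;
    x1, x2: the matched 2D feature point as seen in each frame.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Synthetic check: two views one unit apart observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0])
p1 = P1 @ np.append(X_true, 1.0)
p2 = P2 @ np.append(X_true, 1.0)
x1, x2 = p1[:2] / p1[2], p2[:2] / p2[2]   # projected feature points
print(triangulate(P1, P2, x1, x2))        # ~[0. 0. 5.]
```

Repeating this over all matched feature points in each new frame yields the incrementally growing 3D point cloud of the stomach interior.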

Preferably, the step of forming the landmark-registered three-dimensional stomach data may move and rotate the pre-generated three-dimensional landmark data so that the Euclidean distances between the points constituting the pre-generated three-dimensional landmark data and those of the three-dimensionally reconstructed stomach data become smallest, thereby overlapping the landmark data onto the reconstructed stomach data.
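
The registration described above, which moves and rotates the pre-generated landmark data so that point-to-point Euclidean distances become smallest, is a rigid alignment of two point clouds. The sketch below is an illustration under stated assumptions, not the disclosed algorithm: it assumes NumPy and known point correspondences, and uses the SVD-based closed-form solution; a full ICP would additionally iterate a nearest-neighbor correspondence search around this solve.

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t minimizing ||R @ src_i + t - dst_i||.

    src: Nx3 pre-generated landmark points; dst: Nx3 corresponding points of
    the reconstructed stomach data (correspondences assumed known here).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate/translate a cloud, then recover the transform.
rng = np.random.default_rng(0)
src = rng.standard_normal((30, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Applying the recovered (R, t) to the pre-generated landmark data overlaps it onto the reconstructed stomach data.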

Preferably, the step of forming the landmark-registered three-dimensional stomach data may visualize the registered landmark portion of the landmark-registered three-dimensional stomach data so that it is distinguished from the other portions.

Preferably, the step of calculating the position and pose information of the capsule endoscope may extract feature points from the current frame and the previous frame of the captured image, and then identify the pose information based on the degree of movement and rotation of the capsule endoscope obtained by matching the extracted feature points.
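
As a simplified illustration of extracting the degree of movement and rotation from matched feature points of the previous and current frames, the sketch below fits a planar rotation and translation by least squares (assuming NumPy). An actual monocular pipeline would more likely estimate and decompose an essential matrix, so this is only a conceptual stand-in:

```python
import numpy as np

def estimate_motion_2d(prev_pts, curr_pts):
    """Least-squares 2D rotation angle (radians) and translation between
    matched feature points of the previous and current frame."""
    p = prev_pts - prev_pts.mean(axis=0)
    c = curr_pts - curr_pts.mean(axis=0)
    # Best-fit angle: atan2 of summed cross products over summed dot products.
    angle = np.arctan2(np.sum(p[:, 0] * c[:, 1] - p[:, 1] * c[:, 0]),
                       np.sum(p[:, 0] * c[:, 0] + p[:, 1] * c[:, 1]))
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    t = curr_pts.mean(axis=0) - R @ prev_pts.mean(axis=0)
    return angle, t

# Synthetic check: rotate matched points by 10 degrees and shift them.
rng = np.random.default_rng(1)
prev_pts = rng.standard_normal((20, 2))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.1])
curr_pts = prev_pts @ R_true.T + t_true
angle, t = estimate_motion_2d(prev_pts, curr_pts)
print(np.rad2deg(angle), t)   # ~10.0 degrees, ~[0.3 -0.1]
```

The recovered rotation and translation per frame pair are what the text calls the degree of movement and rotation underlying the pose information.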

Preferably, the step of detecting the landmark may include a step of inputting the captured image into an artificial intelligence model trained on a two-dimensional landmark data set.
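
A per-frame classification step of the kind described above can be sketched as follows. The raw class scores stand in for the output of a hypothetical model trained on the 2D landmark data set; the label list and the confidence threshold are illustrative assumptions, and rejecting low-confidence frames models the case where no landmark is in view.

```python
import numpy as np

# Hypothetical label set; the actual trained model and classes are not
# specified beyond "a classification algorithm" in the text.
LABELS = ["cardia", "fundus", "greater_curvature", "angulus", "pylorus"]

def classify_frame(logits, threshold=0.6):
    """Map a classifier's raw class scores for one frame to a landmark label.

    Returns None when no class is confident enough, i.e. the frame is
    treated as showing no landmark.
    """
    z = np.exp(logits - logits.max())   # numerically stable softmax
    probs = z / z.sum()
    best = int(np.argmax(probs))
    return LABELS[best] if probs[best] >= threshold else None

print(classify_frame(np.array([0.1, 4.0, 0.2, 0.0, 0.3])))   # "fundus"
print(classify_frame(np.array([1.0, 1.1, 1.0, 0.9, 1.0])))   # None
```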

Preferably, the method may include a step of extracting current position information and current pose information of the capsule endoscope from the identified position and pose information of the capsule endoscope when any one of a plurality of landmarks is detected.

Preferably, the step of generating the movement path of the capsule endoscope may include: a step of generating a path for moving to a first landmark based on the current position information and current pose information of the capsule endoscope when the detected landmark is not the first landmark; and a step of generating a path for moving from the first landmark to a second landmark based on the current position information and current pose information of the capsule endoscope when the detected landmark is the first landmark.

Preferably, the step of generating the movement path of the capsule endoscope may generate the movement path of the capsule endoscope so that the capsule endoscope moves sequentially according to a preset order of landmarks.

Preferably, the landmark may include at least one region inside the stomach among the cardia, fundus, greater curvature, angulus, and pylorus.

Another aspect of the present invention includes: an image receiving unit that acquires an image captured by a capsule endoscope inserted into a human body; a calculation unit that detects a landmark from the image acquired by the capsule endoscope and identifies position and pose information of the capsule endoscope from three-dimensionally reconstructed stomach data; a search unit that generates a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional gastroscopy map; and a control unit that controls the capsule endoscope to move along the movement path generated by the search unit.

Preferably, the control unit may control the driving of the capsule endoscope using an external magnetic field.
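
As a deliberately simplified illustration of this magnetic guidance (the coil currents, field magnitudes, and actuation dynamics of a real electromagnetic control device are outside this sketch), the external field can be oriented along the direction from the capsule's current position toward the next waypoint of the generated path, assuming the capsule's internal magnet aligns with the applied field. NumPy is assumed.

```python
import numpy as np

def field_direction(capsule_pos, waypoint):
    """Unit vector along which to orient the external magnetic field so the
    magnetically aligned capsule heads toward the next path waypoint."""
    d = np.asarray(waypoint, dtype=float) - np.asarray(capsule_pos, dtype=float)
    n = np.linalg.norm(d)
    if n < 1e-9:                  # already at the waypoint: apply no steering
        return np.zeros(3)
    return d / n

b = field_direction([0.0, 0.0, 0.0], [0.0, 3.0, 4.0])
print(b)                          # [0.  0.6 0.8]
```

Re-evaluating this direction as the tracked capsule position updates yields closed-loop steering along the generated path.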

The present invention has the advantage of enabling convenient and accurate diagnosis and examination by guiding the movement path of the capsule endoscope in real time so that the inside of the stomach can be explored.

In addition, the present invention has the advantage that the current position of the capsule endoscope can be confirmed by constructing the images captured by the capsule endoscope into virtual 3D data, and that visually effective path guidance can be provided.

FIG. 1 shows a flowchart of a three-dimensional stomach navigation method for a capsule endoscope according to an embodiment of the present invention.

FIG. 2 illustrates an example of a three-dimensional stomach navigation method for a capsule endoscope according to an embodiment of the present invention.

FIG. 3 illustrates a process in which pre-generated three-dimensional landmark data is registered with three-dimensionally reconstructed stomach data according to an embodiment of the present invention.

FIG. 4 illustrates an example of identifying the current position of a capsule endoscope based on information about major landmarks inside the stomach according to an embodiment of the present invention.

FIG. 5 shows a path search process when a landmark is first detected according to an embodiment of the present invention.

FIG. 6 shows a capsule endoscope moving sequentially according to a preset order of landmarks according to an embodiment of the present invention.

FIG. 7 shows a configuration diagram of a three-dimensional navigation system according to another embodiment of the present invention.

A step of acquiring an image captured by a capsule endoscope inserted into a human body;

a step of detecting a landmark from the image acquired by the capsule endoscope and identifying position and pose information of the capsule endoscope; and

a step of generating a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional gastroscopy map:

a three-dimensional stomach navigation method for a capsule endoscope, comprising the above steps.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is not limited or restricted by the exemplary embodiments. Like reference numerals in the drawings denote members performing substantially the same function.

The objects and effects of the present invention will be naturally understood or become clearer from the following description, and they are not limited by the following description alone. In describing the present invention, detailed descriptions of known technologies related to the present invention will be omitted when it is judged that they may unnecessarily obscure the gist of the present invention.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this application, terms such as "comprise" or "have" are intended to specify the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood not to preclude in advance the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Terms such as "first" and "second" may be used to describe various components, but the components should not be limited by these terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component.

Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and shall not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

In interpreting a component, the component is interpreted as including an error range even without a separate explicit statement. Where a temporal relationship is described, for example with "after", "following", "next", or "before", non-consecutive cases are also included unless "immediately" or "directly" is used.

Hereinafter, the technical configuration of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a flowchart of a three-dimensional stomach navigation method for a capsule endoscope according to an embodiment of the present invention. Referring to FIG. 1, the method may include acquiring a captured image (S101), determining position and pose information of the capsule endoscope (S103), generating a movement path of the capsule endoscope (S105), controlling the capsule endoscope (S107), and storing data (S109).

Although FIG. 1 shows the steps of the three-dimensional stomach navigation method for a capsule endoscope according to the present embodiment being performed sequentially, the method is not necessarily limited thereto, and the steps illustrated in FIG. 1 may be performed simultaneously or at different times in the course of generating a movement path and driving the capsule endoscope 300 to move along the movement path.

The three-dimensional stomach navigation method for a capsule endoscope guides a three-dimensional path in real time so that the capsule endoscope 300 positioned inside the human body can explore the interior of the stomach, thereby providing an environment in which an examiner or doctor can perform diagnosis and examination conveniently and accurately. To enable three-dimensional path finding, the method reconstructs the capsule endoscope 300 images captured in real time into three-dimensional data, registers the three-dimensionally reconstructed stomach data with pre-generated three-dimensional landmark data, and visualizes the landmarks on the reconstructed stomach data, thereby providing visually effective path guidance.

The three-dimensional stomach navigation method for a capsule endoscope can identify the current position of the capsule endoscope 300 and compute its pose information to provide a three-dimensional movement path on the reconstructed stomach data. The three-dimensional navigation path-finding method can cause multiple landmarks inside the organ to be explored sequentially in a set order.

FIG. 2 illustrates an embodiment of the three-dimensional stomach navigation method for a capsule endoscope according to the present invention. Referring to FIG. 2, the method can reconstruct capsule endoscope images captured in real time into three-dimensional stomach data based on a Visual SLAM algorithm (S201). At the same time, the method can detect landmarks in the captured capsule endoscope images. When a landmark is detected, the method can register the pre-generated three-dimensional landmark data to the reconstructed stomach data and then visualize the landmark (S203). In the course of reconstructing the three-dimensional stomach data, the method can determine the position and pose information of the capsule endoscope 300 in real time, and when a landmark is detected, it can extract the current position and current pose information of the capsule endoscope 300 (S205). The extracted current position and pose information can serve as the basis for generating the movement path.

When a landmark is detected for the first time, the method can generate a movement route based on the current landmark (S207). If the current landmark is the first landmark, the method can guide a movement path to the location of the second landmark, the next one in the search order (S209). If the current landmark is not the first landmark, the method can guide a movement path to the location of the first landmark (S211). When the capsule endoscope 300 has explored the landmarks in the preset order and arrives at the last landmark, the gastroscopic exploration can be ended (S213).

Each step of the three-dimensional navigation path-finding method is described in detail below.

In the step of acquiring a captured image (S101), an image captured by a capsule endoscope inserted into the human body can be acquired.

In the step of acquiring a captured image (S101), the image captured by the capsule endoscope 300 can be reconstructed into three-dimensional stomach data in real time. By guiding the movement path using the reconstructed three-dimensional stomach data, problems such as reduced visibility and image distortion of the capsule endoscope 300 images can be avoided.

As a preliminary task for three-dimensional stomach data reconstruction, the step of acquiring a captured image (S101) can convert the image captured by the capsule endoscope 300 into a virtually stained image. The captured image can be separated into R, G, and B channels over consecutive frames and then converted into a virtually stained image. This step can use a deep learning model trained on a data set consisting of endoscopic images in which the organ is not stained and endoscopic images in which the organ is stained. Although this embodiment describes the preliminary task for three-dimensional stomach data reconstruction as being performed in the step of acquiring a captured image (S101), it is not limited thereto.
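
The channel-separation preprocessing described above can be sketched as follows. This is an illustrative stand-in only: the fixed channel weights in `pseudo_stain` are hypothetical placeholders, whereas the embodiment uses a trained deep learning model to produce the virtually stained image.

```python
import numpy as np

def split_rgb(frame: np.ndarray):
    """Separate an H x W x 3 endoscope frame into its R, G, B channels."""
    return frame[..., 0], frame[..., 1], frame[..., 2]

def pseudo_stain(frame: np.ndarray, weights=(0.8, 1.3, 1.1)) -> np.ndarray:
    """Hypothetical stand-in for the learned staining model: re-weight the
    separated channels to emphasize mucosal texture, then clip to 8-bit range."""
    r, g, b = split_rgb(frame.astype(np.float32))
    stained = np.stack([r * weights[0], g * weights[1], b * weights[2]], axis=-1)
    return np.clip(np.rint(stained), 0, 255).astype(np.uint8)

# A flat gray toy frame; a real input would be a capsule endoscope video frame.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
out = pseudo_stain(frame)
```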

Converting the image captured by the capsule endoscope 300 into a virtually stained image can improve the performance of the subsequent three-dimensional stomach data reconstruction. That is, when the three-dimensional stomach data is reconstructed from the virtually stained image, more feature points can be extracted than from an unstained image, so that three-dimensional stomach data closely resembling the actual interior of the stomach can be reconstructed.

The reconstructed three-dimensional stomach data can be formed in real time by extracting feature points of the current frame and the previous frame from the captured image or the virtually stained image and then matching the extracted feature points.
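
Frame-to-frame feature matching of the kind described here can be illustrated with a minimal brute-force matcher. The toy descriptors and the ratio-test threshold below are assumptions for illustration; a deployed system would use the feature pipeline inside the Visual SLAM front end.

```python
import numpy as np

def match_features(desc_prev: np.ndarray, desc_curr: np.ndarray, ratio: float = 0.75):
    """Brute-force nearest-neighbour matching between the descriptor sets of the
    previous and current frames, with a ratio test to reject ambiguous matches.
    Returns a list of (index_in_prev, index_in_curr) pairs."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_curr - d, axis=1)  # L2 distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:        # keep only unambiguous matches
            matches.append((i, int(best)))
    return matches

# Toy descriptors: both rows of the previous frame reappear in the current frame.
prev = np.array([[1.0, 0.0], [5.0, 5.0]])
curr = np.array([[5.1, 5.0], [1.0, 0.1], [9.0, 9.0]])
pairs = match_features(prev, curr)
```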

Conventionally, the SfM (Structure from Motion) algorithm has been used for feature point extraction and matching. The SfM algorithm reconstructs three-dimensional data by detecting common feature points and computing their correlations across images of a single object acquired from multiple viewpoints. However, the SfM algorithm is not suitable for the present invention, which reconstructs images captured in real time by the capsule endoscope 300 into three-dimensional data.

In an embodiment according to the present invention, the step of acquiring a captured image (S101) reconstructs images captured and transmitted in real time into three-dimensional stomach data. In one embodiment, a Visual SLAM algorithm can be used to reconstruct the images captured in real time into three-dimensional stomach data. The Visual SLAM algorithm requires less computation than the SfM algorithm and is therefore advantageous for building 3D data in real time.

FIG. 3 illustrates a process in which pre-generated three-dimensional landmark data according to an embodiment of the present invention is registered to the reconstructed three-dimensional stomach data. Referring to FIG. 3, the step of acquiring a captured image (S101) may include forming landmark-registered three-dimensional stomach data by registering the reconstructed stomach data with the pre-generated three-dimensional landmark data. Here, image registration refers to a processing technique of transforming different images so that they are represented in a single coordinate system.

Looking at the registration process in detail, in the step of forming the landmark-registered three-dimensional stomach data, when a landmark is detected in the captured image, a landmark identical to the detected one can be extracted from the pre-generated three-dimensional landmark data and registered to the reconstructed stomach data. In this step, the pre-generated three-dimensional landmark data can be translated and rotated so that the Euclidean distances between the points constituting the pre-generated landmark data and the points of the reconstructed stomach data are minimized, and the landmark data can then be overlapped onto the reconstructed stomach data.

In one embodiment, the step of forming the landmark-registered three-dimensional stomach data can perform the data registration using the ICP (Iterative Closest Point) algorithm. Given two point clouds, the ICP algorithm computes the distances between corresponding points and finds the transformation matrix that minimizes these distances.
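
A minimal sketch of the ICP principle described above, assuming small synthetic point clouds and brute-force nearest-neighbour correspondences; the actual registration of landmark data would use a production ICP implementation.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| via SVD (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30):
    """Minimal ICP: pair each source point with its nearest target point,
    solve for the rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]              # nearest-neighbour correspondences
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
    return cur

# Synthetic check: a rigidly moved copy of a small cloud is aligned back onto it.
rng = np.random.default_rng(0)
src = rng.normal(size=(30, 3))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.01, -0.02, 0.005])
dst = src @ R_true.T + t_true
R_est, t_est = best_fit_transform(src, dst)      # exact when correspondences are known
aligned = icp(src, dst)
err = float(np.abs(aligned - dst).max())
```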

The step of forming the landmark-registered three-dimensional stomach data can visualize the registered landmark portions so that they are distinguished from the other portions of the data.

By registering the three-dimensional landmark data to the virtual 3D data and visualizing the landmarks, this step makes it possible to identify the locations of representative landmarks within the three-dimensionally constructed stomach and can provide a visually effective 3D navigation function.

As a preliminary task for landmark detection, the step of forming the landmark-registered three-dimensional stomach data can generate the three-dimensional landmark data. This step can use a deep learning model trained on a data set composed of multiple frames of images corresponding to representative landmark regions inside the stomach captured by the capsule endoscope 300. In the detection step (S103), images corresponding to the representative landmark regions inside the stomach can be input into the trained deep learning model to generate the three-dimensional landmark data.

As a preliminary task for the movement path generation performed in the later step of generating the movement path of the capsule endoscope (S105), the step of forming the landmark-registered three-dimensional stomach data can generate a three-dimensional gastroscopy map that serves as the signpost for the movement path. This step can convert a capsule endoscope image of a typical or average person into three-dimensional stomach data and register the pre-generated three-dimensional landmark data to the converted stomach data to generate the three-dimensional gastroscopy map. Although this embodiment describes the preliminary tasks for landmark detection and movement path generation as being performed in the step of forming the landmark-registered three-dimensional stomach data, it is not limited thereto.

FIG. 4 shows an example of identifying the current position of the capsule endoscope based on major landmark information inside the stomach according to an embodiment of the present invention. Referring to FIG. 4, the step of determining the position and pose information of the capsule endoscope (S103) can detect a landmark from the image acquired by the capsule endoscope 300. Here, a landmark means a region by which a specific position or part inside the stomach can be identified. For example, representative landmarks of the stomach can include at least one of the cardia, the fundus, the greater curvature, the angulus, or the pylorus.

Referring to the left-hand image of FIG. 4, four landmarks are set inside the stomach: the cardia and fundus (Cardia & Fundus, Ex.1), the greater curvature (Ex.2), the angulus (Ex.3), and the pylorus (Ex.4). Referring to the right-hand image of FIG. 4, it can be confirmed that the landmarks on the landmark-registered three-dimensional stomach data have better visibility than the landmarks on the reconstructed stomach data alone.

The step of determining the position and pose information of the capsule endoscope (S103) may include inputting the captured image into an artificial intelligence model trained on a two-dimensional landmark data set. When the image captured by the capsule endoscope 300 is input into this model, landmark detection can be performed in real time. The artificial intelligence used here may be a classification algorithm.

The step (S103) can determine the position and pose information of the capsule endoscope 300. Here, the position means the approximate location of the capsule endoscope 300 inside the stomach, and the pose information means the position, angle (rotation angle), or movement direction of the camera built into the capsule endoscope 300. The step (S103) can extract the feature points of the current frame and the previous frame of the captured image, and then determine the pose information based on the degree of translation and rotation of the capsule endoscope obtained by matching the extracted feature points. The determined position and pose information can be used for movement path generation and the like.
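
The idea of recovering the capsule's motion from matched feature points can be illustrated in a simplified planar form. This is a sketch under strong assumptions: the real system estimates a full 3-D pose inside the SLAM pipeline, whereas here an in-plane rotation and translation are recovered from already-matched points.

```python
import numpy as np

def relative_motion(pts_prev, pts_curr):
    """Planar rotation angle, rotation matrix, and translation mapping the matched
    feature points of the previous frame onto those of the current frame."""
    c0, c1 = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    p, q = pts_prev - c0, pts_curr - c1
    # optimal 2-D rotation from the summed cross and dot products of centred points
    angle = np.arctan2((p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0]).sum(), (p * q).sum())
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    t = c1 - R @ c0
    return angle, R, t

# Matched points rotated by 30 degrees and shifted by (2, 1) between frames.
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
prev_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
curr_pts = prev_pts @ R_true.T + np.array([2.0, 1.0])
angle, R, t = relative_motion(prev_pts, curr_pts)
```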

In the step of extracting the current position information and current pose information of the capsule endoscope, when any one of the multiple landmarks is detected, the current position information and current pose information of the capsule endoscope can be extracted from the determined position and pose information of the capsule endoscope.

The step of generating the movement path of the capsule endoscope (S105) can generate the movement path of the capsule endoscope based on at least one of the position and pose information of the capsule endoscope, the landmark, or the three-dimensional gastroscopy map.

When a landmark is detected, the step (S105) can generate a movement path along which the capsule endoscope 300 explores the vicinity of the detected landmark. Alternatively, when a landmark is detected, the step (S105) can generate a movement path along which the capsule endoscope 300 moves to the location of the next landmark to be explored. The step (S105) can also generate a movement path along which the capsule endoscope 300 explores the vicinity of the detected landmark and then moves to the location of the next landmark to be explored.

The step (S105) can generate the movement path of the capsule endoscope so that the capsule endoscope moves sequentially according to a preset order of landmarks. For example, assuming there are first to third landmarks and the search order is set from the first landmark to the third landmark, the step (S105) can generate a movement path so that the capsule endoscope 300 explores the vicinity of the first landmark and then moves to the location of the second landmark. Subsequently, when the capsule endoscope 300 reaches the location of the second landmark, the step (S105) can generate a movement path so that it explores the vicinity of the second landmark and then moves to the location of the third landmark. When the capsule endoscope 300 reaches the location of the third landmark, the search can be ended.
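
The preset landmark order can be represented by a simple lookup; the landmark names below are placeholders taken from the landmarks named in the text.

```python
# Preset exploration order (FIG. 6): cardia/fundus -> greater curvature -> angulus -> pylorus.
LANDMARK_ORDER = ["cardia_fundus", "greater_curvature", "angulus", "pylorus"]

def next_target(current: str):
    """Return the next landmark to visit after `current`, or None when the
    current landmark is the last one and exploration should end."""
    i = LANDMARK_ORDER.index(current)
    return LANDMARK_ORDER[i + 1] if i + 1 < len(LANDMARK_ORDER) else None
```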

The movement path generated in the step (S105) can be provided three-dimensionally so that the capsule endoscope 300 can be controlled within the interior space of the stomach. That is, the movement path does not mean a path moving two-dimensionally on a plane, but can mean a path moving three-dimensionally in space. The step (S105) can generate a movement path that takes into account the position, angle, and movement direction of the capsule endoscope 300 inside the stomach so that the capsule endoscope 300 can move through the interior space of the stomach.
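
As a toy illustration of a spatial (rather than planar) movement path, straight-line waypoints between the capsule's current position and a target landmark can be generated as follows. This is an assumption-only sketch: an actual path would additionally respect the stomach wall and the capsule's pose.

```python
import numpy as np

def waypoints(p_start, p_goal, n: int = 5) -> np.ndarray:
    """Hypothetical straight-line 3-D waypoints from the capsule's current
    position to the target landmark position (n points including endpoints)."""
    p_start = np.asarray(p_start, dtype=float)
    p_goal = np.asarray(p_goal, dtype=float)
    s = np.linspace(0.0, 1.0, n)[:, None]        # interpolation parameter per waypoint
    return (1.0 - s) * p_start + s * p_goal

path = waypoints([0, 0, 0], [4, 2, 2], n=5)
```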

The step (S105) can provide a movement path that serves as a guide when an examiner or doctor operates the capsule endoscope 300 from outside. If there is a separate control device that automatically controls the capsule endoscope 300, the step (S105) can provide the movement path to the control device.

The step of generating the movement path of the capsule endoscope (S105) may include, when the detected landmark is not the first landmark, generating a path for moving to the first landmark based on the current position information and current pose information of the capsule endoscope; and, when the detected landmark is the first landmark, generating a path for moving from the first landmark to the second landmark based on the current position information and current pose information of the capsule endoscope.

FIG. 5 illustrates the path-finding process when a landmark is detected for the first time according to an embodiment of the present invention. Referring to FIG. 5, when a landmark is first detected in the image captured by the capsule endoscope 300 (S301), the step (S105) can determine whether the detected landmark is the first landmark, i.e., the starting point of the search (S303). If the first detected landmark is not the first landmark, the step (S105) can generate a movement path to the location of the first landmark (S305). At this time, to guard against moving to a location other than the first landmark, after moving to the location of the first landmark it is determined once again whether the detected landmark is the first landmark.

If the landmark is the first landmark, the starting point of the path search, the step (S105) can generate a path to the location of the second landmark, the point following the first landmark (S307). After generating the path to the location of the second landmark, the step (S105) can generate paths that follow the preset landmark order (S309). When the last landmark is reached, the path guidance can be ended (S311).
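
The first-detection decision of FIG. 5 (steps S303 to S307) can be summarized as a small function; the landmark names are illustrative placeholders.

```python
def initial_route_target(detected: str, order: list) -> str:
    """Decide the first navigation target when a landmark is detected for the
    first time: if the detected landmark is not the first landmark, steer to the
    first landmark and re-check there (S305); if it is the first landmark,
    proceed to the second landmark (S307)."""
    if detected != order[0]:
        return order[0]   # go back to the start of the itinerary
    return order[1]       # already at the start: head to the second landmark

order = ["cardia_fundus", "greater_curvature", "angulus", "pylorus"]
```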

FIG. 6 shows the capsule endoscope moving sequentially according to the preset order of landmarks according to an embodiment of the present invention. Referring to FIG. 6, the step (S105) can generate the movement path of the capsule endoscope 300 in the order of the cardia and fundus (Cardia & Fundus, Ex.1), the greater curvature (Ex.2), the angulus (Ex.3), and the pylorus (Ex.4) according to the preset landmark order. At this time, each landmark is registered with the pre-generated three-dimensional landmark data, confirming improved visibility.

The step (S105) uses the position and pose information of the capsule endoscope 300 extracted when generating the movement path; however, when generating a movement path to a location inside the stomach that has not yet been reconstructed in the three-dimensional stomach data, the movement path can be generated based on the three-dimensional gastroscopy map.

The step (S105) can generate a path moving through three-dimensional space so that the capsule endoscope 300 can be operated within the reconstructed three-dimensional stomach data. Accordingly, considering the position and pose information of the capsule endoscope 300, the step (S105) can generate a movement path that takes into account the position, angle, and movement direction of the capsule endoscope 300 inside the stomach so that it can move through the interior space of the stomach.

한편, 본 발명의 실시예에 따르면 촬영된 영상을 획득하는 단계(S101)는 캡슐내시경(300)에서 촬영된 영상을 실시간으로 3차원 위 데이터로 복원할 수 있으므로, 캡슐내시경(300)이 제1 랜드마크에 도달하였을 때는 다른 랜드마크까지 이동하는 이동 경로를 생성하기 위한 3차원 위 데이터가 구축되어 있지 않을 수 있다. 이 경우, 캡슐내시경의 이동 경로를 생성하는 단계(S105)는 사전에 구축된 3차원 위 내시경 지도를 기준으로 다음 랜드마크까지 이동하는 이동 경로를 생성할 수 있다. 대다수 사람들의 위 내부의 형태, 랜드마크의 위치 및 모양은 거의 유사하므로 3차원 위 내시경 지도를 활용할 수 있다. Meanwhile, according to an embodiment of the present invention, because the step (S101) of acquiring the captured image reconstructs the images captured by the capsule endoscope (300) into 3D stomach data in real time, the 3D stomach data needed to generate a path to another landmark may not yet have been built when the capsule endoscope (300) reaches the first landmark. In this case, the step (S105) of generating the movement path of the capsule endoscope can generate a path to the next landmark based on a 3D stomach endoscope map constructed in advance. Since the internal shape of the stomach and the location and shape of the landmarks are nearly identical across most people, the 3D stomach endoscope map can be utilized.

캡슐내시경의 이동 경로를 생성하는 단계(S105)는 3차원 위 내시경 지도를 활용하여 이동 경로를 생성한 이후, 해당 이동 경로에 대한 3차원 위 데이터가 복원되면, 3차원으로 복원된 위 데이터를 기준으로 이동 경로를 보정할 수 있다. In the step (S105) of generating the movement path of the capsule endoscope, after a movement path has been generated using the 3D stomach endoscope map, once the 3D stomach data covering that path has been reconstructed, the movement path can be corrected based on the reconstructed 3D stomach data.
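One simplistic way to realize such a correction, once reconstructed stomach data covering the atlas-planned path becomes available, is to pull each waypoint toward the nearest reconstructed point while capping how far a waypoint may move. This brute-force nearest-neighbour sketch (the function name, the `max_snap` bound, and the units are assumptions) stands in for whatever correction the actual system applies; in practice a KD-tree would replace the linear scan.

```python
import numpy as np

def correct_path(waypoints, reconstructed_points, max_snap=5.0):
    """Correct atlas-based waypoints against newly reconstructed 3D data.

    waypoints: (N, 3) path planned on the prior stomach endoscope map.
    reconstructed_points: (M, 3) points of the 3D-reconstructed stomach.
    max_snap: cap on the correction per waypoint (map units, assumed mm),
    so that a sparse or noisy reconstruction cannot drag the path far away.
    """
    corrected = []
    for wp in np.asarray(waypoints, dtype=float):
        d = np.linalg.norm(reconstructed_points - wp, axis=1)
        nearest = reconstructed_points[np.argmin(d)]
        step = nearest - wp
        dist = np.linalg.norm(step)
        if dist > max_snap:  # don't trust large corrections
            step = step * (max_snap / dist)
        corrected.append(wp + step)
    return np.array(corrected)
```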

캡슐내시경을 제어하는 단계(S107)는 캡슐내시경의 이동 경로를 생성하는 단계(S105)에서 생성된 이동 경로를 따라 이동하도록 캡슐내시경(300)을 제어할 수 있다. 제어단계(S107)는 외부 자기장을 이용하여, 캡슐내시경(300)의 구동을 제어하는 제어 장치의 자기장을 변화시키는 방식으로 캡슐내시경(300)을 제어할 수 있다. The step (S107) of controlling the capsule endoscope can control the capsule endoscope (300) to move along the movement path generated in the step (S105) of generating the movement path of the capsule endoscope. The control step (S107) can control the capsule endoscope (300) by changing the magnetic field of a control device that controls the operation of the capsule endoscope (300) using an external magnetic field.
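The magnetic actuation itself is hardware-specific, but the steering idea can be caricatured as pointing the external field toward the next waypoint. The function below, its `gain` blending factor, and the vector interface are purely illustrative assumptions; solving for actual coil currents that realize the commanded field is assumed away here.

```python
import numpy as np

def field_command(capsule_pos, capsule_heading, waypoint, gain=0.5):
    """Sketch of magnetic steering toward the next waypoint.

    Blends the capsule's current heading with the direction toward the
    waypoint by `gain` (0 = keep heading, 1 = turn fully toward goal),
    and returns a unit vector giving the commanded field direction.
    """
    to_wp = np.asarray(waypoint, dtype=float) - np.asarray(capsule_pos, dtype=float)
    to_wp = to_wp / np.linalg.norm(to_wp)
    heading = np.asarray(capsule_heading, dtype=float)
    cmd = (1 - gain) * heading + gain * to_wp
    return cmd / np.linalg.norm(cmd)
```

Calling this once per control cycle with the latest estimated position and pose yields a field direction that gradually turns the capsule onto the planned path.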

저장단계(미도시)는 실시간으로 복원되는 3차원 위 데이터, 검출된 랜드마크 이미지, 랜드마크와 3차원으로 복원된 위 데이터가 정합된 데이터 등 본 발명을 수행하는데 있어 발생하는 각종 결과물을 저장할 수 있다. 기존의 위 내시경은 검사 후 특정 부분을 다시 확인해 볼 수가 없다는 문제점이 있으나, 저장단계는 캡슐내시경(300)이 촬영한 영상 및 재구성된 영상을 저장하므로 의사의 개인 능력에 따라 확인하지 못해 놓치는 부분이 존재할 수 있다는 문제점을 해결할 수 있다. The storage step (not shown) can store the various outputs produced in carrying out the present invention, such as the 3D stomach data reconstructed in real time, the detected landmark images, and the data in which the landmarks are registered with the 3D-reconstructed stomach data. Conventional gastroscopy has the drawback that a specific region cannot be re-examined after the procedure; because the storage step stores both the images captured by the capsule endoscope (300) and the reconstructed images, it also resolves the problem that regions may be missed depending on the individual skill of the physician.

도 7은 본 발명의 다른 실시예인 3차원 내비게이션 시스템(100)의 구성도를 나타낸다. 도 7을 참조하면, 3차원 내비게이션 시스템(100)은 영상 수신부(110), 연산부(130), 탐색부(150), 및 제어부(170)를 포함할 수 있다. Fig. 7 shows a configuration diagram of a three-dimensional navigation system (100) according to another embodiment of the present invention. Referring to Fig. 7, the three-dimensional navigation system (100) may include an image receiving unit (110), a calculation unit (130), a search unit (150), and a control unit (170).

영상 수신부(110)는 인체 내부에 삽입된 캡슐내시경으로부터 촬영된 영상을 획득할 수 있다. 영상 수신부(110)는 무선 통신을 통해 캡슐내시경(300)에서 촬영된 영상을 수신할 수 있다. 캡슐내시경(300)은 초당 수 내지 수십장의 정지 영상 또는 동작 영상을 촬영할 수 있으며, 영상 수신부(110)에 실시간으로 데이터를 송신할 수 있다. 영상 수신부(110)는 전술한 촬영된 영상을 획득하는 단계(S101)의 동작을 수행할 수 있다. The image receiving unit (110) can obtain an image captured from a capsule endoscope inserted into the human body. The image receiving unit (110) can receive an image captured from the capsule endoscope (300) via wireless communication. The capsule endoscope (300) can capture several to several tens of still images or moving images per second and transmit data to the image receiving unit (110) in real time. The image receiving unit (110) can perform the operation of the step (S101) of acquiring the captured image described above.
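A minimal sketch of such a real-time receiving loop follows, assuming a `read_frame` callable that stands in for the wireless link (returning one frame, or None when the link is idle). Frames are queued for the reconstruction stage; when the consumer falls behind, the oldest frame is dropped so the pipeline keeps up with the capsule's capture rate of several to tens of frames per second. All names here are illustrative.

```python
import queue
import threading

def receive_frames(read_frame, out_queue, stop):
    """Push frames from the wireless receiver into a bounded queue.

    read_frame: callable returning the next frame or None if idle.
    out_queue:  queue.Queue consumed by the 3D reconstruction stage.
    stop:       threading.Event used to shut the loop down.
    """
    while not stop.is_set():
        frame = read_frame()
        if frame is None:
            continue
        try:
            out_queue.put_nowait(frame)
        except queue.Full:
            out_queue.get_nowait()   # drop the oldest frame
            out_queue.put_nowait(frame)
```

In a running system this loop would sit on its own thread, with the reconstruction stage pulling from `out_queue`.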

연산부(130)는 캡슐내시경에서 획득된 영상으로부터 랜드마크를 검출하고, 3차원으로 복원된 위 데이터로부터 상기 캡슐내시경의 위치 및 포즈 정보를 파악할 수 있다. 연산부(130)는 전술한 캡슐내시경의 위치 및 포즈 정보를 파악하는 단계(S103)의 동작을 수행할 수 있다. The calculation unit (130) can detect landmarks from the images acquired by the capsule endoscope and determine the position and pose information of the capsule endoscope from the three-dimensionally reconstructed stomach data. The calculation unit (130) can perform the operation of the above-described step (S103) of determining the position and pose information of the capsule endoscope.

탐색부(150)는 캡슐내시경의 위치 정보 및 포즈 정보, 랜드마크, 또는 3차원 위 내시경 지도 중 적어도 하나를 기초로 캡슐내시경의 이동 경로를 생성할 수 있다. 탐색부(150)는 전술한 캡슐내시경의 이동 경로를 생성하는 단계의 동작을 수행할 수 있다. The search unit (150) can generate a movement path of the capsule endoscope based on at least one of the position and pose information of the capsule endoscope, the landmark, or the 3D stomach endoscope map. The search unit (150) can perform the operation of the above-described step of generating the movement path of the capsule endoscope.

제어부(170)는 탐색부가 생성한 이동 경로를 따라 이동하도록 상기 캡슐내시경을 제어할 수 있다. 제어부(170)는 전술한 캡슐내시경을 제어하는 단계(S107)의 동작을 수행할 수 있다. The control unit (170) can control the capsule endoscope to move along the movement path generated by the search unit. The control unit (170) can perform the operation of the step (S107) of controlling the capsule endoscope described above.

저장부(미도시)는 실시간으로 복원되는 3차원 위 데이터, 검출된 랜드마크 이미지, 랜드마크와 3차원으로 복원된 위 데이터가 정합된 데이터 등 본 발명을 수행하는데 있어 발생하는 각종 결과물을 저장할 수 있다. 저장부(미도시)는 전술한 저장단계의 동작을 수행할 수 있다. The storage unit (not shown) can store various results generated in performing the present invention, such as 3D stomach data restored in real time, detected landmark images, and data in which landmarks and 3D restored stomach data are aligned. The storage unit (not shown) can perform the operation of the storage step described above.

이상에서 대표적인 실시예를 통하여 본 발명을 상세하게 설명하였으나, 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자는 상술한 실시예에 대하여 본 발명의 범주에서 벗어나지 않는 한도 내에서 다양한 변형이 가능함을 이해할 것이다. 그러므로 본 발명의 권리 범위는 설명한 실시예에 국한되어 정해져서는 안 되며, 후술하는 특허청구범위뿐만 아니라 특허청구범위와 균등 개념으로부터 도출되는 모든 변경 또는 변형된 형태에 의하여 정해져야 한다.Although the present invention has been described in detail through representative examples above, those skilled in the art will understand that various modifications can be made to the above-described embodiments without departing from the scope of the present invention. Therefore, the scope of the rights of the present invention should not be limited to the described embodiments, but should be determined by all changes or modifications derived from the claims and equivalent concepts as well as the claims.

본 발명은 인체 내 위치한 캡슐내시경이 위 내부를 탐색할 수 있도록 실시간으로 3차원 경로를 안내하는 캡슐내시경의 3차원 위 내비게이션 방법 및 시스템을 제공하는 것을 일 목적으로 한다. The purpose of the present invention is to provide a three-dimensional stomach navigation method and system for a capsule endoscope that guides a three-dimensional path in real time so that a capsule endoscope positioned inside a human body can explore the inside of the stomach.

또한, 본 발명은 3차원 내비게이션 경로 탐색이 가능하도록 실시간으로 촬영되는 캡슐내시경 영상을 3D 데이터로 복원하고, 복원된 3D 데이터와 3차원 랜드마크 데이터를 정합하여 상기 3D 데이터에서 랜드마크를 시각화하고자 한다. In addition, the present invention aims to restore capsule endoscope images captured in real time to 3D data so as to enable 3D navigation path search, and to visualize landmarks in the 3D data by aligning the restored 3D data with 3D landmark data.

또한, 본 발명은 캡슐내시경이 이동할 다음 랜드마크 위치까지의 경로를 안내하여 캡슐 내시경이 장기 내부를 순차적으로 탐색하게 하고자 한다. In addition, the present invention aims to guide the capsule endoscope along a path to the next landmark location to enable the capsule endoscope to sequentially explore the inside of an organ.

Claims (13)

1. A three-dimensional stomach navigation method for a capsule endoscope, the method comprising: acquiring an image captured by a capsule endoscope inserted into a human body; detecting a landmark from the image acquired by the capsule endoscope and determining position and pose information of the capsule endoscope; and generating a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional stomach endoscope map.

2. The method of claim 1, wherein acquiring the captured image comprises registering the three-dimensionally reconstructed stomach data with pre-generated three-dimensional landmark data to form landmark-registered three-dimensional stomach data.

3. The method of claim 2, wherein the three-dimensionally reconstructed stomach data is formed in real time by extracting feature points from a current frame and a previous frame of the captured image and matching the extracted feature points.

4. The method of claim 2, wherein forming the landmark-registered three-dimensional stomach data comprises translating and rotating the pre-generated three-dimensional landmark data so as to minimize the Euclidean distance between the points constituting the pre-generated three-dimensional landmark data and the points constituting the three-dimensionally reconstructed stomach data, thereby overlapping the landmark data onto the reconstructed stomach data.

5. The method of claim 2, wherein forming the landmark-registered three-dimensional stomach data comprises visualizing the registered landmark portions of the landmark-registered three-dimensional stomach data so that they are distinguished from the other portions.

6. The method of claim 1, wherein determining the position and pose information of the capsule endoscope comprises extracting feature points from a current frame and a previous frame of the captured image and determining the pose information based on the degree of movement and rotation of the capsule endoscope obtained by matching the extracted feature points.

7. The method of claim 1, wherein determining the position and pose information of the capsule endoscope comprises inputting the captured image into an artificial-intelligence model trained on a two-dimensional landmark data set.

8. The method of claim 1, further comprising, when any one of a plurality of landmarks is detected, extracting current position information and current pose information of the capsule endoscope from the determined position and pose information of the capsule endoscope.

9. The method of claim 8, wherein generating the movement path of the capsule endoscope comprises: when the detected landmark is not a first landmark, generating a path to the first landmark based on the current position information and current pose information of the capsule endoscope; and when the detected landmark is the first landmark, generating a path from the first landmark to a second landmark based on the current position information and current pose information of the capsule endoscope.

10. The method of claim 1, wherein generating the movement path of the capsule endoscope comprises generating the movement path so that the capsule endoscope moves sequentially according to a preset order of landmarks.

11. The method of claim 1, wherein the landmark comprises at least one region inside the stomach among the cardia, fundus, greater curvature, angulus, and pylorus.

12. A three-dimensional navigation system comprising: an image receiving unit that acquires an image captured by a capsule endoscope inserted into a human body; a calculation unit that detects a landmark from the image acquired by the capsule endoscope and determines position and pose information of the capsule endoscope from three-dimensionally reconstructed stomach data; a search unit that generates a movement path of the capsule endoscope based on at least one of the position information and pose information of the capsule endoscope, the landmark, or a three-dimensional stomach endoscope map; and a control unit that controls the capsule endoscope to move along the movement path generated by the search unit.

13. The system of claim 12, wherein the control unit controls the operation of the capsule endoscope using an external magnetic field.
PCT/KR2024/007432 2023-09-08 2024-05-31 Three-dimensional stomach navigation method and system for capsule endoscope Pending WO2025053376A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2023-0119607 2023-09-08
KR1020230119607A KR20250037631A (en) 2023-09-08 2023-09-08 3d stomach navigation method of capsule endoscope and system using the same

Publications (1)

Publication Number Publication Date
WO2025053376A1 true WO2025053376A1 (en) 2025-03-13

Family

ID=94924186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/007432 Pending WO2025053376A1 (en) 2023-09-08 2024-05-31 Three-dimensional stomach navigation method and system for capsule endoscope

Country Status (2)

Country Link
KR (1) KR20250037631A (en)
WO (1) WO2025053376A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120014242A * 2009-03-26 2012-02-16 Intuitive Surgical Operations, Inc. System for steering the tip of an endoscopic device toward one or more landmarks and providing visual guidance to assist an operator in endoscope navigation
US20130304446A1 (en) * 2010-12-30 2013-11-14 Elisha Rabinovitz System and method for automatic navigation of a capsule based on image stream captured in-vivo
WO2021141253A1 * 2020-01-10 2021-07-15 IntroMedic Co., Ltd. System and method for identifying position of capsule endoscope on basis of position information about capsule endoscope
KR20220130495A (en) * 2021-03-18 2022-09-27 아주대학교산학협력단 Method and device for detecting movement status of capsule endoscope
US20230110263A1 (en) * 2021-10-08 2023-04-13 Cosmo Artificial Intelligence - Al Limited Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102224828B1 (en) 2019-03-29 2021-03-09 전남대학교산학협력단 Method for controlling motion of paramagnetism capsule endoscope
KR102595784B1 (en) 2020-11-11 2023-10-31 동국대학교 산학협력단 Magnetically controlled, ph sensor-assisted navigation capsule endoscopy and control method thereof


Also Published As

Publication number Publication date
KR20250037631A (en) 2025-03-18

Similar Documents

Publication Publication Date Title
KR102458587B1 (en) Universal device and method to integrate diagnostic testing into treatment in real-time
CN114332019B (en) Endoscopic image detection assistance system, method, medium, and electronic device
JP5676058B1 (en) Endoscope system and method for operating endoscope system
CN201870615U (en) Medical wide-angle speculum
Bao et al. A computer vision based speed estimation technique for localiz ing the wireless capsule endoscope inside small intestine
US20030045790A1 (en) System and method for three dimensional display of body lumens
US20050251017A1 (en) System and method registered video endoscopy and virtual endoscopy
CN108430373A (en) Apparatus and method for tracking the position of an endoscope within a patient
WO2016195401A1 (en) 3d glasses system for surgical operation using augmented reality
KR102625668B1 (en) A capsule endoscope apparatus and supporting methods for diagnosing the lesions
WO2015137741A1 (en) Medical imaging system and operation method therefor
WO2018097596A1 (en) Radiography guide system and method
WO2020204424A2 (en) Shape reconstruction device using ultrasonic probe, and shape reconstruction method
CN102160773B (en) In-vitro magnetic control sampling capsule system based on digital image guidance
WO2022114357A1 (en) Image diagnosis system for lesion
WO2022231329A1 (en) Method and device for displaying bio-image tissue
WO2014157796A1 (en) Endoscope system for diagnosis support and method for controlling same
WO2021215800A1 (en) Surgical skill training system and machine learning-based surgical guide system using three-dimensional imaging
WO2025053376A1 (en) Three-dimensional stomach navigation method and system for capsule endoscope
US20250009248A1 (en) Endoscopy support system, endoscopy support method, and storage medium
CN206659781U (en) A kind of capsule gastroscope
CN114451848B (en) Endoscope capsule track guiding method, device and system
CN113545732B (en) Capsule endoscope system
KR20200132174A (en) AR colonoscopy system and method for monitoring by using the same
WO2020138731A1 (en) Em sensor-based otolaryngology and neurosurgery medical training simulator and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24862959

Country of ref document: EP

Kind code of ref document: A1