
WO2012014438A1 - Device, method, and program for assisting endoscopic observation - Google Patents

Device, method, and program for assisting endoscopic observation

Info

Publication number
WO2012014438A1
Authority
WO
WIPO (PCT)
Prior art keywords
endoscope
tubular tissue
image
center line
displayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2011/004184
Other languages
English (en)
Japanese (ja)
Inventor
元中 季
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Publication of WO2012014438A1 publication Critical patent/WO2012014438A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor, combined with photographic or television appliances
    • A61B 1/00002: Operational features of endoscopes
    • A61B 1/00004: Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/000094: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10068: Endoscopic image
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10116: X-ray image
    • G06T 2207/10124: Digitally reconstructed radiograph [DRR]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30028: Colon; Small intestine
    • G06T 2207/30061: Lung
    • G06T 2207/30172: Centreline of tubular or elongated structure
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/41: Medical

Definitions

  • The present invention relates to a technique for supporting endoscopic observation during surgery or examination under an endoscope inserted into a tubular tissue of a subject, and in particular to a technique for supporting endoscopic observation using a virtual endoscopic image representing the tubular tissue of the subject.
  • A technique has been proposed that supports grasping the position of an endoscope within a tubular tissue by using a virtual endoscopic image, generated from a three-dimensional volume image obtained by CT or similar imaging, that resembles a real endoscopic image.
  • In one such technique, a virtual endoscopic image is generated from a three-dimensional image of the subject acquired in advance, the error between the virtual endoscopic image and the real image captured by the endoscope is calculated, the viewpoint of the virtual endoscope is moved until the calculated error becomes less than an allowable error, and the error calculation for the moved virtual endoscopic image is repeated; the position of the endoscope is taken to be identified when the calculated error falls below the allowable error.
  • However, for a soft tubular tissue such as the colon, if the patient is supine when the three-dimensional image for the virtual endoscopic image is captured but in the lateral position when the endoscope is inserted, the tubular tissue deforms under gravity differently in each body position, which makes image matching difficult.
  • Furthermore, because similar structures continue through sections of the tubular tissue that contain none of the bends or branches characteristic of the tubular structure, matching based on a comparison of image features is difficult.
  • The present invention has been made in view of the above circumstances, and aims to make it possible to grasp more reliably the position of the endoscope within the tubular tissue when observing the tubular tissue with an endoscope inserted into the subject.
  • An endoscope observation support apparatus according to the present invention comprises: center line acquisition means for acquiring the center line of a tubular tissue of a subject from a previously acquired three-dimensional image of the subject; endoscopic image display means for displaying an endoscopic image photographed while an endoscope inserted into the tubular tissue is moved along the longitudinal direction of the tubular tissue; position determining means for, when one characteristic region of the tubular tissue is displayed in the displayed endoscopic image, determining the position of the endoscope as a reference position and setting a position corresponding to the one characteristic region on the center line; movement amount acquisition means for acquiring the movement amount and traveling direction of the endoscope moved further from the reference position; current position calculating means for calculating, as the current position, the position separated from the position corresponding to the one characteristic region by the acquired movement amount along the center line in the acquired traveling direction; and current position display means for displaying an index representing the calculated current position on the center line.
  • An endoscope observation support method according to the present invention acquires the center line of a tubular tissue of a subject from a previously acquired three-dimensional image of the subject; displays an endoscopic image photographed while an endoscope inserted into the tubular tissue is moved along the longitudinal direction of the tubular tissue; when one characteristic region of the tubular tissue is displayed in the displayed endoscopic image, receives input of a reference position of the endoscope and sets a position corresponding to the one characteristic region on the center line; acquires the movement amount and traveling direction of the endoscope moved further from the reference position; calculates, as the current position, the position separated from the position corresponding to the one characteristic region by the acquired movement amount along the center line in the acquired traveling direction; and displays an index representing the calculated current position on the center line.
  • An endoscope observation support program according to the present invention causes a computer to function as each of the means described above: center line acquisition means, endoscopic image display means, position determining means, movement amount acquisition means, current position calculating means, and current position display means for displaying an index representing the calculated current position on the center line.
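  • The claimed current-position computation reduces to walking along the extracted center line, starting from the point Q1 associated with the feature region, by the measured movement amount in the measured traveling direction. The following Python sketch illustrates one way to do this for a center line represented as an ordered polyline; the function name and the +1/-1 encoding of the direction Sd are illustrative assumptions, not part of the specification.

```python
import numpy as np

def position_on_centerline(centerline, q1_index, sq, sd):
    """Walk along the center line from Q1 by arc length sq in direction sd.

    centerline : (N, 3) array of ordered points along the tube's center line
    q1_index   : index of the point Q1 corresponding to the feature region
    sq         : movement amount, in the same units as the coordinates
    sd         : traveling direction, +1 (deeper) or -1 (pulling back)
    Returns the interpolated 3-D current position Q2 (clamped at the ends).
    """
    step = 1 if sd >= 0 else -1
    remaining = sq
    i = q1_index
    while 0 <= i + step < len(centerline):
        seg = np.linalg.norm(centerline[i + step] - centerline[i])
        if remaining <= seg:
            t = remaining / seg if seg > 0 else 0.0
            return centerline[i] + t * (centerline[i + step] - centerline[i])
        remaining -= seg
        i += step
    return centerline[i]  # ran off the end of the center line: clamp
```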
  • Here, if the subject is a human body, the tubular tissue is typically a tubular internal organ such as the intestine or the stomach, but it may be any tissue into which an endoscope can be inserted.
  • The feature region may be any region whose form the user can distinguish from other regions in the tubular tissue, such as a bent portion of the intestine or a region where a polyp is conspicuous. Such regions can be specified and indicated by indexes of various shapes such as a circle, a rectangle, or a closed curved surface.
  • When the tubular tissue is the large intestine, the characteristic region may be any of the anal canal, the splenic flexure, and the hepatic flexure. When the tubular tissue is the bronchus, the characteristic region may be either the larynx or a bronchial bifurcation. When the tubular tissue is the esophagus, the characteristic region may be either the pharynx or the cardia.
  • The reference position is a position used as a reference for calculating the position of the endoscope within the tubular tissue; it means the position of the endoscope at the time when the characteristic region of the tubular tissue is displayed in the endoscopic image.
  • This reference position is determined by storing the position of the endoscope, in response to input from the user's mouse, keyboard, or other input device, when the feature region is displayed in the endoscopic image; various methods can be used. For example, when the feature region of the tubular tissue is displayed in the endoscopic image, the user may determine the reference position by clicking a button provided on the screen with the mouse, or the reference position may be determined by a command input through an interactive GUI that orders acquisition of the position.
  • The specified reference position is the position from which measurement of the movement amount and traveling direction of the endoscope along the longitudinal direction of the tubular tissue starts, and the movement amount along the longitudinal direction of the tubular tissue is obtained on the basis of it. For example, the reference position can be specified as the insertion depth of the endoscope by marking, on the outer surface of the probe exposed outside the tubular tissue, numerical values representing the length from the distal end of the endoscope. Alternatively, the reference position may be specified by an existing function of the endoscope that measures its insertion depth into the tubular tissue.
  • The "position" in "setting a position corresponding to the one feature region on the center line" means a position on the center line in the vicinity of the feature region. For example, it may be the viewpoint position of the virtual endoscope when the feature region is displayed with the virtual endoscope, the position of the point on the center line closest to the feature region, or an arbitrary position on the center line within a predetermined range from the feature region.
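  • As a minimal sketch of the "closest point on the center line" option, assuming (hypothetically) that the center line is an ordered (N, 3) array of sample points and that the feature region is summarized by a single 3-D coordinate:

```python
import numpy as np

def nearest_centerline_index(centerline, feature_pos):
    """Index of the center-line sample closest to the feature region."""
    distances = np.linalg.norm(centerline - feature_pos, axis=1)
    return int(np.argmin(distances))
```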
  • Various methods can be used to acquire the movement amount and traveling direction of the endoscope, as long as the amount and direction of movement along the tubular tissue can be acquired. For example, the movement amount acquisition means may acquire the movement amount and traveling direction of the probe of the endoscope exposed outside the tubular tissue. Specifically, a scale can be provided on the probe of the endoscope or on its coating film, and the portion of the probe exposed outside the tubular tissue can be read by optical detection means: an optical photographing device arranged for fixed-point photographing photographs the probe, and the scale is read from the photographed image by a known method to obtain the movement amount and traveling direction. Alternatively, the user may manually measure the movement amount and traveling direction of the probe and input the measured values from a keyboard or other input device. Any method that measures the movement distance and traveling direction along the longitudinal direction of the tubular structure may be used.
  • The index indicating the current position of the endoscope may be anything that displays the position identifiably on the display screen. For example, it may be a cross mark, or any known index capable of indicating a position, such as a point, a circle, a rectangle, an arrow, or a closed curved surface.
  • The endoscopic observation support apparatus of the present invention may further include virtual endoscopic image generation means for generating a virtual endoscopic image whose viewpoint is moved along the center line, and display means for displaying the generated virtual endoscopic image, with the reference position determined when the same feature region of the tubular tissue is displayed in both the virtual endoscopic image and the endoscopic image. It suffices that both images display the same feature region: since the form of the feature region may deform depending on the body position of the subject, the forms in the two images need not match strictly as long as they can be recognized as the same feature region. To associate the viewpoint position of the virtual endoscopic image accurately with the position of the imaging means of the endoscope, such as its CCD, when the same feature region is displayed, it is preferable that, at the time the reference position is input, the same feature region be represented at the same position and with the same size in both images.
  • Either image may show the feature region first: the viewpoint of the virtual endoscopic image may first be moved to display the feature region of the tubular tissue, and the same feature region then displayed in the endoscopic image in accordance with the movement of the endoscope; or the feature region may first be displayed in the endoscopic image in accordance with the movement of the endoscope, and the same feature region then displayed by moving the viewpoint of the virtual endoscopic image; or the same feature region may be displayed in both images simultaneously.
  • The endoscopic image and the virtual endoscopic image may be displayed on one display device or separately on a plurality of display devices, as long as they are displayed in a comparable manner. Preferably, they are arranged in the same physical location so that both images can be observed simultaneously. The two images may also be displayed by switching between them, or in a superimposed manner, as long as they can be compared with each other.
  • The position determining means may determine the reference position for each different characteristic region in the tubular tissue. In that case, the endoscope observation support apparatus of the present invention receives a new reference position, acquires the movement amount and traveling direction from the new reference position, calculates the position separated, by the acquired movement amount in the acquired traveling direction, from the position representing the feature region at the time the new reference position was input, and displays the calculated position.
  • The current position display means may further display not only the index of the calculated position but also other indexes.
  • The current position display means may further display the tubular tissue together with the center line. This tubular tissue may be extracted from a three-dimensional image of the subject, or may be a model of the tubular tissue or the like.
  • The current position display means may further display a schematic endoscope whose tip is at the current position.
  • The current position display means may further display an index representing the feature region.
  • According to the endoscope observation support apparatus, method, and program of the present invention, an endoscopic image captured while an endoscope inserted into a tubular tissue is moved along the longitudinal direction of the tubular tissue is displayed; when one feature region of the tubular tissue is displayed in the displayed endoscopic image, the reference position of the endoscope is input and a position corresponding to the feature region is set on the center line L; the movement amount and traveling direction of the endoscope moved further from the reference position are acquired; and the position separated, from the position corresponding to the feature region, by the acquired movement amount in the acquired traveling direction along the center line is calculated as the current position and displayed. The position of the endoscope within the tubular tissue can therefore be grasped more reliably.
  • When the apparatus further includes virtual endoscopic image generation means for generating a virtual endoscopic image whose viewpoint is moved along the center line and display means for displaying the generated virtual endoscopic image, and the determination of the reference position is performed when the feature region of the tubular tissue is displayed in both the displayed virtual endoscopic image and the endoscopic image, the reference position of the endoscope can be set more accurately, so the viewpoint position of the virtual endoscope can be calculated more accurately. As a result, the indicator on the center line is displayed at a more accurate position, and the position of the endoscope can be grasped more accurately.
  • When the position determining means determines a reference position for each different characteristic region, the difference between the position of the endoscope and the viewpoint position of the virtual endoscopic image caused by the body position of the subject and by the stretching of the tubular tissue due to the movement of the endoscope can be corrected, and the reference position of the endoscope can be set more accurately, so the viewpoint position of the virtual endoscope can be calculated more accurately. As a result, the indicator on the center line is displayed at a more accurate position, and the position of the endoscope can be grasped more accurately.
  • When the endoscope observation support apparatus of the present invention further displays, together with the center line, the tubular tissue, a schematic endoscope whose tip is at the viewpoint position, or an indicator representing the feature region, the position of the endoscope within the tubular tissue can be grasped still more accurately.
  • FIG. 1 Hardware configuration diagram of an endoscope observation support system according to an embodiment of the present invention
  • FIG. 2 Functional block diagram of the endoscope observation support system in the first embodiment of the present invention
  • FIG. 4 A diagram schematically showing an example of the input screen when the endoscopic image and the virtual endoscopic image represent the same feature region
  • FIG. 1 is a hardware configuration diagram showing an outline of the endoscope observation support system. As shown in the figure, this system includes an endoscope 1, a digital processor 2, a light source device 3, an endoscope image display 4, a modality 5, an image storage server 6, a position detection device 7, an image processing workstation display (hereinafter referred to as WS display) 8, and an image processing workstation 9.
  • the endoscope 1 is a flexible endoscope for the large intestine and is inserted into the large intestine of the subject.
  • Light guided through an optical fiber from the light source device 3 is emitted from the distal end portion 1A of the endoscope 1, and an image of the interior of the body cavity of the subject is obtained by the imaging optical system of the endoscope 1.
  • The digital processor 2 converts the imaging signal obtained by the endoscope 1 into a digital image signal, corrects the image quality by digital signal processing such as white balance adjustment and shading correction, adds incidental information defined by the DICOM (Digital Imaging and Communications in Medicine) standard, and outputs endoscopic image data (I_RE). The output endoscopic image data (I_RE) are transmitted to the image processing workstation 9 via the LAN according to a communication protocol compliant with the DICOM standard. The digital processor 2 also converts the endoscopic image data (I_RE) into an analog signal and outputs it to the endoscope image display 4, so that the endoscopic image (I_RE) is displayed there. Since the endoscope 1 acquires the imaging signal at a predetermined frame rate, the endoscopic image (I_RE) is displayed on the endoscope image display 4 as a moving image representing the interior of the body cavity. The endoscope 1 can also capture still images in response to user operations.
  • The modality 5 is a device that generates image data (V) of a three-dimensional medical image representing a region to be examined by imaging that region of the subject; here it is a CT apparatus. Incidental information defined by the DICOM standard is also added to the three-dimensional medical image data (V), which are likewise transmitted to the image processing workstation 9 via the LAN according to a DICOM-compliant communication protocol.
  • The image storage server 6 is a computer connected to the modality 5 and the image processing workstation 9 via the LAN. It stores and manages, in an image database, the medical image data obtained by the modality 5 and the image data of medical images generated by image processing on the image processing workstation 9, and includes a large-capacity external storage device and database management software (for example, object relational database (ORDB) management software).
  • The position detection device 7 consists of a known optical photographing device. A support tool (not shown) supports, horizontally movably, the part of the probe 1B of the endoscope exposed outside the tubular tissue of the subject, in order to detect the position of the endoscope 1. The position detection device 7 is fixed, by a well-known fixing method, at a position 7A from which the numerical values on the probe 1B supported horizontally by the support tool can be photographed at a fixed point. The position detection device 7 is connected to the image processing workstation 9 via a known interface; in response to an instruction from the image processing workstation 9, it optically photographs the numerical value on the probe 1B and transmits the acquired signal value to the image processing workstation 9.
  • The image processing workstation 9 is a computer having a well-known hardware configuration including a CPU, a main storage device, an auxiliary storage device, an input/output interface, a communication interface, and a data bus, to which an input device (a pointing device, a keyboard, and the like) and the WS display 8 are connected. The image processing workstation 9 is connected to the digital processor 2, the modality 5, and the image storage server 6 via the LAN, and to the position detection device 7 via a known interface. A well-known operating system and various application software are installed on the image processing workstation 9, as is an application for executing the endoscope observation support processing of the present invention. This software may be installed from a recording medium such as a CD-ROM, or downloaded from the storage device of a server connected via a network such as the Internet and then installed.
  • FIG. 2 is a functional block diagram of the endoscopic observation support system according to the first embodiment of the present invention.
  • As shown in the figure, the endoscope observation support system of the present embodiment includes: the endoscope 1; an endoscope image forming unit 2 that forms an endoscopic image from the image signal captured by the endoscope 1; endoscopic image display means 4 (the endoscope image display 4) that displays the endoscopic image 31 photographed while the endoscope 1 inserted into the tubular tissue of the subject is moved along the longitudinal direction of the tissue; a three-dimensional medical image forming unit 5 that forms a three-dimensional image of the subject; a virtual endoscopic image generation unit 12 that generates a virtual endoscopic image 41 by moving the viewpoint along the center line L; display means 8 (the WS display 8) that displays the generated virtual endoscopic image 41 and the center line L; reference position determining means 13 that, when one characteristic region of the tubular tissue is displayed in the endoscopic image 31, determines the position of the endoscope as the reference position P1 and sets the position Q1 corresponding to the feature region R on the center line; movement amount acquisition means 14 that acquires the movement amount and traveling direction of the endoscope 1 moved from the reference position P1; current position calculating means 15 that calculates the current position Q2; and current position display means 16 that displays the index M representing the calculated position Q2 on the center line L.
  • each functional block shown in FIG. 2 will be described in detail.
  • Functional blocks that correspond to the hardware of FIG. 1 are denoted by the same reference numerals: the function of the endoscope image forming unit 2 is realized by the digital processor of FIG. 1, the function of the three-dimensional medical image forming unit 5 is realized by the modality of FIG. 1, and the function of the position detecting unit 7 is realized by the position detection device 7.
  • the broken line frame indicates the image processing workstation 9, and the functions of the processing units in the broken line frame are realized by executing a predetermined program on the image processing workstation 9.
  • The 3D image acquisition unit 11 has a communication interface function for receiving the 3D medical image V from the 3D medical image forming unit 5 or the image storage server 6 and storing it in a predetermined memory area of the image processing workstation 9.
  • the center line acquisition unit 17 extracts the tubular tissue of the subject and the center line L of the tubular tissue from the 3D image acquired by the 3D image acquisition unit 11.
  • Various known methods can be applied to the extraction of the tubular tissue and the extraction of the center line.
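  • By way of illustration only, one family of known approaches is to segment the lumen and thin it to a skeleton. The sketch below assumes a scikit-image version whose skeletonize accepts 3-D volumes (older versions expose this as skeletonize_3d), and uses a simple threshold as a stand-in for a real segmentation step; the resulting voxels would still need to be ordered along the tube to form the polyline L.

```python
import numpy as np
from skimage.morphology import skeletonize

def extract_centerline_voxels(volume, threshold):
    """Rough sketch: binarize the tubular tissue, then thin it to a
    one-voxel-wide skeleton approximating the center line."""
    mask = volume > threshold      # placeholder segmentation rule
    skeleton = skeletonize(mask)   # 3-D thinning
    return np.argwhere(skeleton)   # unordered voxel coordinates on the skeleton
```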
  • In the present embodiment, the center line acquisition unit 17 also determines in advance the viewpoints of the virtual endoscope used when the virtual endoscopic image 41 is reconstructed so as to display the feature regions R_V1 and R_V2 of the large intestine from a predetermined direction. The virtual endoscopic image generation unit 12 generates, by a known method, the virtual endoscopic image 41 with the viewpoint moved along the center line L acquired by the center line acquisition unit 17.
  • The display means 8 (the WS display 8) displays the generated virtual endoscopic image 41 and the center line L, while the endoscopic image display means 4 (the endoscope image display 4) displays the endoscopic image 31 photographed along the longitudinal direction of the large intestine by the endoscope 1 inserted into the tubular tissue of the subject. The endoscope image display 4 and the WS display 8 are displays of the same size arranged side by side so that a doctor can easily compare them, and the endoscopic image 31 and the virtual endoscopic image 41 are displayed in windows of the same size on the respective displays 4 and 8 for easy comparison.
  • When one feature region of the tubular tissue is displayed in the endoscopic image 31, the reference position determination unit 13 accepts input from the user via a pointing device or an input device such as a keyboard; in response to this input, it determines the position of the endoscope as the reference position P1. The determined reference position P1 is specified by the movement amount acquisition means 14 as the insertion depth of the endoscope 1. The reference position determination unit 13 further sets, as the position Q1 corresponding to the feature region R_V1, the coordinates of the viewpoint of the virtual endoscope at which the feature region R_V1 is displayed in the virtual endoscopic image 41, and stores the reference position P1 in a predetermined memory area of the image processing workstation 9 in association with the position Q1 corresponding to the feature region R_V1.
  • The movement amount acquisition means 14 functions as a communication interface that, through communication with the position detection device 7, acquires at predetermined time intervals an image signal obtained by photographing the numerical value Sc recorded on the probe 1B of the endoscope at an arbitrary endoscope position P, and recognizes and acquires the numerical value Sc by performing known image processing on that signal. From the numerical value Sc1 acquired from the position detection device 7 when the endoscope is at the reference position P1 and the numerical value Sc2 acquired when the endoscope is at an arbitrary position P2, the movement amount acquisition means 14 acquires the movement amount Sq (Sq = |Sc1 - Sc2|) and the traveling direction Sd. The traveling direction Sd is understood as follows: if Sc2 - Sc1 is positive, the endoscope has advanced deeper into the large intestine from the reference position P1 (the direction from the rectum toward the cecum); if Sc1 - Sc2 is positive, the endoscope has been pulled back from the reference position P1 (the direction from the cecum toward the rectum); and if Sc1 = Sc2, the endoscope has not moved and Sd = 0.
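  • A minimal sketch of this difference computation, with the sign convention for Sd (+1 toward the cecum, -1 toward the rectum, 0 for no movement) taken as an assumption consistent with the description above:

```python
def movement_from_scale(sc1, sc2):
    """Derive movement amount Sq and traveling direction Sd from the two
    scale values read off the probe at the fixed imaging point 7A."""
    sq = abs(sc2 - sc1)
    if sc2 > sc1:
        sd = +1   # advancing deeper (rectum toward cecum) -- assumed convention
    elif sc2 < sc1:
        sd = -1   # being pulled back (cecum toward rectum)
    else:
        sd = 0    # no movement
    return sq, sd
```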
  • the current position display means 16 displays an index M representing the calculated current position Q2 on the center line L.
  • FIG. 6A is a diagram illustrating an example of the display of the center line L and the index in the present embodiment. As shown in FIG. 6A, the current position display means 16 displays on the WS display 8 a guide image 51 in which the center line L extracted by the center line acquisition means 17 is shown with the index M representing the calculated current position Q2 on it.
  • Here, the current position Q2 represents the viewpoint position of the virtual endoscope separated, from the viewpoint position at which the characteristic region R_V1 is displayed, by the movement amount of the endoscope in the direction in which the endoscope has moved. As the current position, an indicator of the virtual-endoscope position corresponding to a part of the endoscope 1 other than the imaging unit 1A could also be displayed on the guide image 51; however, it is preferable to display, identifiably as the current position Q2, the position corresponding to the imaging unit 1A of the endoscope 1.
  • a 3D medical image V is formed by imaging the tubular tissue of the subject by the 3D medical image forming unit 5.
  • the three-dimensional image acquisition unit 11 acquires the three-dimensional medical image V formed by the three-dimensional medical image forming unit 5 (S01).
  • the center line acquisition unit 17 acquires the center line L of the tubular tissue based on the 3D medical image V acquired by the 3D image acquisition unit 11 (S02).
  • the virtual endoscopic image generation unit 12 generates a virtual endoscopic image 41 in which the viewpoint is moved along the center line L acquired by the center line acquisition unit 17.
  • the viewpoint of the virtual endoscopic image is moved in accordance with a user movement instruction received by an input device such as a pointing device.
  • the display unit 8 displays the generated virtual endoscopic image 41 (S03).
  • the virtual endoscopic image generation unit 12 displays a virtual endoscopic image 41 representing the feature region R V1 in response to an input by the user input device.
  • The endoscope image forming unit 2 repeatedly forms an endoscopic image 31 at a predetermined frame rate from the image signal photographed along the longitudinal direction of the tubular tissue by the endoscope 1 inserted into the tubular tissue of the subject, and the formed endoscopic image 31 is displayed in real time on the endoscope image display 4 as a through moving image (S04).
  • FIG. 5 is a diagram for explaining the method of calculating the current position in the present embodiment. The right side of FIG. 5 schematically represents the endoscope 1 and the large intestine C_R of the subject; the endoscope 1 moves through C_R along the path represented by the dashed line, with the imaging unit 1A of the endoscope at its tip. The left side of FIG. 5 shows the large intestine C_V of the subject, obtained from tomography by a modality such as a CT apparatus, together with its center line L; the viewpoint of the virtual endoscope moves on the center line L. FIG. 5 shows the relationship between: the positions E1 and E2 of the imaging unit 1A moving along the length direction of the large intestine C_R; the position P1 of the endoscope probe 1B, fixed-point photographed at the predetermined position 7A by the position detection device 7 when the imaging unit 1A is at E1; the position P2 of the probe 1B when the imaging unit 1A is at E2; the position Q1 corresponding to the feature region R_V1; and the current position Q2 moved from the feature region R_V1. FIG. 5 likewise shows the relationship between the position P1′ of the probe 1B fixed-point photographed at the predetermined position 7A when the imaging unit 1A is at the position E1′, the position P2′ of the probe 1B when the imaging unit 1A is at the position E2′, the position Q1′ corresponding to the feature region R_V2, and the current position Q2′ of the virtual endoscopic image 41 moved from the feature region R_V2.
  • FIG. 4 shows a state in which a curved region of the large intestine is the feature region R1, and the feature regions R_E1 and R_V1 are displayed in both the endoscopic image and the virtual endoscopic image.
  • A user such as a doctor moves the endoscope 1 so that the feature region R_E1 displayed in the endoscopic image 31 and the feature region R_V1 displayed in the virtual endoscopic image 41 come to be displayed with substantially the same size and the same arrangement. This is the state in which the imaging unit 1A of the endoscope 1 (the tip of the endoscope 1) is placed at E1 and the viewpoint of the virtual endoscope is placed at Q1. When the endoscopic image 31 and the virtual endoscopic image 41 display the same feature regions R_E1 and R_V1 with substantially the same size and arrangement, the position E1 of the imaging unit 1A of the endoscope and the viewpoint position Q1 of the virtual endoscope represent substantially the same point in the large intestines C_R and C_V.
  • The user then moves the pointer Ptr with an input device such as a mouse and clicks part of the feature region R_V1 in the virtual endoscopic image. The reference position determination unit 13 thereby receives an instruction to determine the reference position P1 (Y in S05) and determines the reference position P1. Furthermore, the reference position determination unit 13 acquires the coordinates of the position designated by the user's input, sets the coordinates of the virtual-endoscope viewpoint for the feature region R_V1 closest to the acquired coordinates as the coordinates of the position Q1 corresponding to the feature region R_V1, and stores the reference position P1 and the position Q1 corresponding to the feature region R_V1 in association with each other in a predetermined memory area of the image processing workstation 9. Based on this determination, the movement amount acquisition unit 14 acquires the insertion depth of the endoscope at the reference position P1, and the current position calculation unit 15 acquires the coordinates of the position Q1 corresponding to the feature region R_V1 (S06).
  • The feature region R_V1 closest to the coordinates of the designated position can be specified by various known methods. For example, the distances between the coordinates of the virtual-endoscope viewpoints corresponding to the plurality of stored feature regions R_V1 and R_V2 and the coordinates of the designated position may be calculated, and the feature region with the shortest distance taken as the closest one; alternatively, a feature region R_V some or all of whose coordinates lie within a predetermined range of the designated position may be taken as the closest feature region R_V.
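  • A hedged sketch of this nearest-feature lookup, with the feature regions stored as a hypothetical mapping from name to virtual-endoscope viewpoint coordinates and an optional predetermined range:

```python
import numpy as np

def nearest_feature_region(feature_viewpoints, clicked_pos, max_dist=None):
    """Pick the stored feature-region viewpoint closest to the clicked point;
    optionally require it to lie within a predetermined range."""
    names = list(feature_viewpoints)
    pts = np.array([feature_viewpoints[n] for n in names])
    d = np.linalg.norm(pts - clicked_pos, axis=1)
    i = int(np.argmin(d))
    if max_dist is not None and d[i] > max_dist:
        return None   # nothing within the allowed range
    return names[i]
```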
  • In response to the determination by the reference position determination unit 13, the movement amount acquisition unit 14 obtains the insertion depth of the endoscope at the reference position P1 as the numerical value Sc1 photographed at the fixed-point imaging position 7A. That is, as shown in FIG. 5, the reference position P1, at which the position E1 of the imaging unit 1A of the endoscope 1 and the viewpoint position Q1 of the virtual endoscope coincide in the large intestine, is specified by the insertion depth Sc1 of the endoscope 1 into the large intestine.
  • Then, the movement amount acquisition means 14 acquires the movement amount and traveling direction of the endoscope 1 along the length direction of the large intestine from the reference position P1 (S07). That is, as shown in FIG. 5, after the reference position P1 is acquired, the movement amount Sq and the traveling direction Sd from the reference position P1 to the position P2 to which the endoscope 1 has been moved by the user or the like are acquired: the movement amount acquisition means 14 acquires, from the position detection device 7 at predetermined time intervals, an image signal obtained by photographing the numerical value Sc2 written on the outer surface of the probe 1B at the arbitrary endoscope position P2, and performs known image processing on it to obtain the numerical value Sc2. The position P2 to which the endoscope 1 has moved from the reference position P1 is thus specified by the insertion depth Sc2 of the endoscope 1 into the large intestine, and the movement amount acquisition unit 14 acquires the movement amount Sq and the traveling direction Sd of the endoscope from the reference position P1 on the basis of the difference between Sc2 and Sc1, as described above. In FIG. 5, the movement amount Sq of the endoscope corresponds to the difference between the insertion depths Sc2 and Sc1.
  • The current position calculating means 15 obtains the coordinates of the position Q1 corresponding to the feature region R_V1 and calculates, as the current position Q2, the position separated from Q1 along the center line L of the large intestine by the acquired movement amount Sq in the acquired traveling direction Sd (S08).
  • The current position display means 16 displays, on the WS display 8, the guide image 51 in which the center line L extracted by the center line acquisition means 17 and the index M representing the calculated current position Q2 are displayed (S09). FIG. 6A shows an example in which an arrow (index) M indicating the current position Q2 is displayed on the guide image 51.
  • The image processing workstation 9 repeats the processes from S05 to S10 unless an operation for instructing the end of observation is performed (N in S11), waiting for an instruction to determine the reference position P1 (S05). If there is no instruction to determine the reference position P1 (N in S05) and the endoscope reference position P1 has already been determined at least once (N in S10), the processes from S07 to S09 are performed. As a result, the guide image 51 displays the index M representing the current position on its center line L in temporal conjunction with the movement of the endoscope 1.
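  • Putting the pieces together, the S05-S10 repetition can be sketched as an event loop. The ui and tracker objects and their methods are purely hypothetical stand-ins for the input device, the displays, and the position detection device 7; the loop reuses movement_from_scale and position_on_centerline from the earlier sketches.

```python
def observation_loop(ui, tracker, centerline):
    """Skeleton of the S05-S10 repetition: wait for a reference-position
    instruction, otherwise keep updating the index from probe movement."""
    q1_index = None   # centerline index of the current feature region
    sc1 = None        # scale reading Sc1 at the reference position
    while not ui.end_requested():                          # S11
        if ui.reference_requested():                       # S05 (Y)
            q1_index = ui.pick_feature()                   # S06: set Q1
            sc1 = tracker.read_scale()                     # S06: store Sc1
        elif q1_index is not None:                         # S10 (N)
            sq, sd = movement_from_scale(sc1, tracker.read_scale())  # S07
            q2 = position_on_centerline(centerline, q1_index, sq, sd)  # S08
            ui.draw_index(q2)                              # S09: index M on L
```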
  • When a new reference position is instructed, the reference position determination means 13 determines a new reference position P1′ as described above, and sets the coordinates of the viewpoint at which the feature region R_V2 is represented in the virtual endoscopic image 41 as the coordinates of the position Q1′ corresponding to the new feature region R_V2. The movement amount acquisition means 14 acquires the insertion depth Sc1′ of the endoscope at the reference position P1′, and the current position calculation means 15 acquires the coordinates of the position Q1′ corresponding to the new feature region R_V2 (S06).
  • At this time, the position E1′ of the distal end portion of the endoscope 1 is substantially the same as the viewpoint position Q1′ of the virtual endoscope displaying the characteristic region R_V2.
  • When the imaging unit 1A of the endoscope is placed at E2′, the movement amount acquisition unit 14 acquires the numerical value Sc2′, written on the outer surface of the probe 1B, representing the insertion depth of the endoscope at the position P2′; the movement amount Sq and the traveling direction Sd from the new reference position P1′ are acquired (S07); the current position Q2′, separated from the position Q1′ corresponding to the new feature region R_V2 by the acquired movement amount Sq in the acquired traveling direction Sd, is calculated (S08); and the calculated current position Q2′ is displayed (S09).
  • As described above, in the present embodiment, an endoscopic image taken along the longitudinal direction of the tubular tissue by the endoscope inserted into the tubular tissue of the subject is displayed; when the feature region is displayed in the endoscopic image, the reference position of the endoscope is determined and the position Q1 corresponding to the feature region is set on the center line L; the movement amount Sq and the traveling direction Sd of the endoscope moved further from the reference position P1 are acquired; and the current position Q2, separated from the position Q1 corresponding to the feature region along the center line L by the acquired movement amount Sq in the traveling direction Sd, is calculated and displayed. Furthermore, by determining a new reference position for each feature region, the shift between the position E2 of the imaging unit 1A of the endoscope 1 and the current position Q2 is corrected: since the endoscope reference position P1′ can be determined anew more accurately, the viewpoint position Q2′ of the virtual endoscope can be calculated more accurately.
  • The path along which the endoscope moves in the longitudinal direction of the tubular tissue does not necessarily coincide with the center line of the tubular tissue, and deformation inevitably arises from the body position of the subject and from the stretching of the tubular tissue caused by the movement of the endoscope; these produce a deviation between the position E1 of the imaging unit 1A of the endoscope 1 and the current position Q2. By determining the reference position for each characteristic region, the index on the center line is displayed at a more accurate position, and the position of the endoscope can therefore be grasped more accurately.
  • FIG. 6B is a diagram showing a modification of the display of the current position Q2 on the guide image 51.
  • the current position display means 16 may further display the tubular tissue 52 together with the center line L.
  • This tubular tissue may be obtained by extracting from a three-dimensional image of a subject, or may be a tubular tissue model or the like.
  • The tubular tissue can be displayed by various known methods; for example, it is displayed with an appropriately set arbitrary transparency and an arbitrary coloring, such as multicolor or black and white, by a known display method such as volume rendering.
  • the current position display means 16 may further display a schematic endoscope 53 having the current position Q2 as a tip.
  • the current position display unit 16 may further display an index representing the feature region R.
  • In the example shown, an index representing the position Q1 corresponding to the feature region R_V1 is displayed as a point.
  • In the present embodiment, the reference position determination unit 13 receives the user's designation of the feature region R_V1 on the virtual endoscopic image, and thereby both receives the input of the reference position P1 of the endoscope 1 and associates it with the feature region R of the tubular tissue; since this makes the input operation simpler, the endoscopic diagnosis support of the present invention can be performed more efficiently.
  • As a modification, the positions Q1 and Q1′ corresponding to the feature regions R_V1 and R_V2 may be displayed in a selectable manner, and the position Q1 representing the characteristic region R_V1 corresponding to the reference position P1 of the endoscope 1 may be set with an input device such as a mouse. As long as the reference position determining means 13 associates the reference position P1 with the position Q1 corresponding to the feature region R_V1 of the tubular tissue, the determination of the reference position P1 and its association with the position Q1 may be performed at different timings, in either order.
  • For example, a reference position determination button may be displayed selectably on the screen of the WS display 8, and the reference position determination means 13 may acquire the reference position P1 of the endoscope 1 when the user selects that button with the mouse or the like. The feature region R_V1 specified at the reference position P1 of the endoscope 1, or the point Q1 on the center line L corresponding to the feature region R_V1, may then be separately input through the user's input device or the like, and the reference position P1 and the position Q1 representing the feature region R_V1 associated with each other.
  • In the present embodiment, the coordinates of the virtual-endoscope viewpoints Q1 and Q1′ set in advance are associated as the positions corresponding to the feature regions R_V1 and R_V2, respectively; however, any scheme that associates the reference position P1 of the endoscope with a position corresponding to the feature region R_V1 of the tubular tissue will do. Instead of the viewpoint of the virtual endoscope, a position in the vicinity of the feature region, within a predetermined allowable error range from R_V1 or R_V2, can also be associated. Also, as long as the reference position P1 of the endoscope and the position Q1 corresponding to the feature region R_V1 can be associated, the points Q1 and Q1′ on the center line L representing the feature regions R_V1 and R_V2 need not be extracted in advance; the position Q1 corresponding to R_V1 may be extracted after the feature region R_V1 is displayed in the endoscope image 31 and the reference position P1 of the endoscope 1 is input.
  • In that case, for example, the viewpoint of the virtual endoscope may be moved manually to display the feature region in the virtual endoscopic image 41, and the viewpoint position at which R_V1 is displayed may be set as Q1; alternatively, the center line L may be displayed and Q1 set as the position the user selects on it with the input device.
  • As described above, in the present embodiment, the virtual endoscopic image generating means 12 generates the virtual endoscopic image 41 with the viewpoint moved along the center line L, the display means 8 displays the generated virtual endoscopic image 41, and the reference position P1 is determined when the feature region R_V1 of the tubular tissue is displayed in both the displayed virtual endoscopic image 41 and the endoscopic image 31. The viewpoint position Q1 of the virtual endoscope therefore coincides with the position E1 of the imaging unit 1A of the endoscope with high accuracy, and the current position Q2 calculated on the basis of it accurately represents the position E2 of the imaging unit 1A, so the position of the imaging unit of the endoscope (the position of the distal end) can be grasped more accurately. Since the reference position P1 of the endoscope can be set more accurately, the viewpoint position Q2 of the virtual endoscope can be calculated more accurately; and since the index M on the center line is displayed at a more accurate position, the position of the endoscope after further movement can be grasped more accurately. When the same feature region is represented at the same position and with the same size in both the virtual endoscopic image 41 and the endoscopic image 31, the above effect is even more remarkable.
  • Either image may show the feature region first: the viewpoint of the virtual endoscopic image may first be moved to display the feature region of the tubular tissue, and the same feature region then displayed in the endoscopic image in accordance with the movement of the endoscope; or the feature region may first be displayed in the endoscopic image in accordance with the movement of the endoscope, and the same feature region then displayed by moving the viewpoint of the virtual endoscopic image; or the same feature region may be displayed in both images simultaneously.
  • As another modification, when the feature region of the tubular tissue is displayed in the endoscopic image 31, the reference position P1 of the endoscope 1 may be determined, the positions corresponding to the plurality of feature regions R_V1 and R_V2 extracted in advance may be displayed on the WS display 8, the position corresponding to the feature region R_V1 may be selected with an input device such as a mouse, and the reference position P1 and the position Q1 corresponding to R_V1 associated with each other.
  • In the present embodiment, the movement amount acquisition unit 14 acquires the numerical values attached to the outer surface of the probe 1B of the endoscope 1 and measures the insertion depth of the endoscope 1 into the tubular tissue, so the movement amount and traveling direction of the endoscope can be acquired with a simple device and method. The acquisition method is not restricted to that of this embodiment; any method that can acquire the movement amount and traveling direction along the longitudinal direction of the tubular structure may be used.
  • The index M representing the viewpoint Q2 of the virtual endoscope may be anything that displays the position identifiably; for example, it may be a cross mark, or any known index capable of indicating a position, such as a point, a circle, a rectangle, an arrow, or a closed curved surface. The index M may also represent an arbitrary point on the center line L together with the range of error expected along the center line L around the point Q2 representing the feature region on the center line L.
  • As the modality 5, in addition to the CT apparatus described above, any modality capable of acquiring volume data from which a virtual endoscopic image can be reconstructed, such as an MRI apparatus, can be used.
  • As described in detail above, in the present embodiment the viewpoint position of the virtual endoscope is calculated in synchronization with the movement of the endoscope 1, and the index M indicating the viewpoint position is displayed on the center line L of the guide image 51. A user such as a doctor can therefore grasp the route of the large intestine from the center line of the guide image 51 and the position of the endoscope within the large intestine from the index M on it, and can operate the endoscope while referring to them as appropriate. For this reason, the approach of the endoscope 1 to a curved portion or a target diagnosis site can be captured dynamically and accurately.
  • As in the earlier description, the tubular tissue may be anything into which an endoscope can be inserted, and the feature region may be any region whose form the user can distinguish from other regions in the tubular tissue. The feature region can be indicated by indexes of various shapes, such as a circle, a rectangle, an arrow, or a closed curved surface.
  • the tubular tissue may be a large intestine, and the characteristic region may include any of an anal canal, a spleen fold, and a liver fold.
  • The tubular tissue may be a bronchus, and the feature region may include either the larynx or a bronchial bifurcation.
  • The tubular tissue may be the esophagus, and the feature region may include either the pharynx or the cardia. Any other region that the user can distinguish in the tubular tissue may likewise serve as the feature region.
  • In the embodiment above, the user designates the plural feature regions RV1 and RV2 in the tubular tissue on the virtual endoscopic image with an input device such as a mouse, whereby the plural feature regions RV1 and RV2 are extracted in advance. Alternatively, by matching the center line of a predefined shape model of the tubular tissue, on which feature regions R1, R2, R3, ..., Rn are defined, against the center line L obtained from the three-dimensional image data, the feature regions RV1, RV2, RV3, ..., RVn on the center line L corresponding to the feature regions R1, R2, R3, ..., Rn on the shape model may be extracted automatically (a sketch of such a matching step appears after this list).
  • A warning unit 18 may be added to the endoscope observation support apparatus according to the first embodiment. The warning unit 18 is a processing unit mounted on the image processing workstation 9: after step S08 of the first embodiment shown in the figure, when the viewpoint position Q2 of the virtual endoscope on the center line approaches a position designated in advance, it outputs a warning WM on the guide image 51 (a sketch of the proximity check appears after this list).
  • The warning WM blinks the index M indicating the viewpoint position Q2 of the virtual endoscope, or changes the color of the index M to an identifiable color. The warning may instead be output externally as a warning sound or voice, or a superimposed warning message and a warning sound may both be output.
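The bullets above recover the current position Q2 by moving along the center line L from the associated position Q1 by the acquired movement amount in the acquired movement direction, but no concrete data structure for the center line is fixed. The following is a minimal sketch, assuming the center line is a polyline of 3-D points and that positions on it are tracked by arc length; all names (centerline, s_q1, current_position, and so on) are illustrative assumptions, not identifiers from the embodiment.

```python
import numpy as np

def cumulative_arc_length(centerline: np.ndarray) -> np.ndarray:
    """Cumulative arc length at each vertex of an (N, 3) polyline."""
    segment_lengths = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(segment_lengths)))

def point_at_arc_length(centerline: np.ndarray, s: float) -> np.ndarray:
    """Point on the polyline at arc length s, clamped to the line's ends."""
    cum = cumulative_arc_length(centerline)
    s = float(np.clip(s, 0.0, cum[-1]))
    i = min(int(np.searchsorted(cum, s, side="right")) - 1, len(centerline) - 2)
    t = (s - cum[i]) / max(cum[i + 1] - cum[i], 1e-12)  # guard zero-length segments
    return (1.0 - t) * centerline[i] + t * centerline[i + 1]

def current_position(centerline: np.ndarray, s_q1: float,
                     movement_amount: float, direction: int) -> np.ndarray:
    """Q2: the point `movement_amount` away from Q1 (at arc length s_q1)
    along the center line; direction is +1 for insertion, -1 for withdrawal."""
    return point_at_arc_length(centerline, s_q1 + direction * movement_amount)
```

With helpers of this kind, each insertion-depth reading taken from the probe scale updates Q2, and the index M is simply redrawn at the returned point.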
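The matching between the shape model's center line and the patient center line L is likewise left open by the description. One simple realization, offered purely as an assumption, maps each model feature region by its proportional (normalized) arc-length position onto the patient center line; the mapped arc lengths can then be converted to 3-D points with point_at_arc_length from the previous sketch.

```python
def map_model_features(model_feature_s: list[float], model_length: float,
                       patient_length: float) -> list[float]:
    """Map feature regions R1..Rn, given as arc lengths along the shape
    model's center line, to candidate arc lengths RV1..RVn along the
    patient center line L by proportional arc length."""
    return [s / model_length * patient_length for s in model_feature_s]

# Hypothetical numbers: landmarks at 0, 900, and 1200 mm on a 1500 mm model,
# mapped onto a patient colon center line measured at 1370 mm.
print(map_model_features([0.0, 900.0, 1200.0], 1500.0, 1370.0))
# -> [0.0, 822.0, 1096.0]
```

A real system would likely refine such proportional guesses, for example by local matching around each candidate position, but the uniform mapping conveys the idea.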
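For the warning unit 18, the proximity test reduces to a distance comparison along the center line. A minimal sketch, assuming the designated positions and the viewpoint Q2 are both expressed as arc lengths and assuming an arbitrary 10 mm threshold (the embodiment specifies neither):

```python
def approaching(s_q2: float, designated: list[float],
                threshold_mm: float = 10.0) -> list[float]:
    """Designated center-line positions (arc lengths) that the viewpoint Q2
    of the virtual endoscope has come within `threshold_mm` of."""
    return [s for s in designated if abs(s - s_q2) <= threshold_mm]

# Hypothetical values: a target site marked at 420 mm and a flexure at 610 mm.
for s in approaching(415.0, [420.0, 610.0]):
    print(f"warning WM: viewpoint within 10 mm of the position at {s} mm")
```

Whatever this check returns would then drive the presentation described in the last bullet: blinking or recoloring the index M, superimposing the warning message WM, or emitting a warning sound.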

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Endoscopes (AREA)

Abstract

[Problem] To enable the position, within the tubular tissue, of an endoscope inserted into the tubular tissue of a subject to be identified. [Solution] The center line (L) of the tubular tissue of the subject is acquired from a previously acquired three-dimensional image of the subject; an endoscopic image acquired while the endoscope inserted into the tubular tissue is moved along the longitudinal direction of the tubular tissue is displayed; when a feature region of the tubular tissue appears in the displayed endoscopic image, the reference position of the endoscope is input and a position (Q1) corresponding to the feature region is set on the center line (L); the movement amount and movement direction of the endoscope moved further from the reference position are acquired; the position separated from the position (Q1) corresponding to the feature region by the acquired movement amount in the acquired movement direction along the center line is calculated as the current position (Q2); and an index (M) indicating the calculated current position (Q2) is displayed on the center line (L).
PCT/JP2011/004184 2010-07-28 2011-07-25 Device, method, and program for facilitating endoscopic observation Ceased WO2012014438A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-168932 2010-07-28
JP2010168932A JP2012024518A (ja) Apparatus and method for supporting endoscopic observation, and program

Publications (1)

Publication Number Publication Date
WO2012014438A1 (fr)

Family

ID=45529671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/004184 Ceased WO2012014438A1 (fr) Device, method, and program for facilitating endoscopic observation

Country Status (2)

Country Link
JP (1) JP2012024518A (fr)
WO (1) WO2012014438A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103764041B (zh) * 2012-08-08 2015-12-09 Toshiba Corporation Medical image diagnostic apparatus, image processing apparatus, and image processing method
WO2014028394A1 (fr) 2012-08-14 2014-02-20 Intuitive Surgical Operations, Inc. Systems and methods for registration of multiple vision systems
WO2014171391A1 (fr) 2013-04-15 2014-10-23 Olympus Medical Systems Corp. Endoscope system
JP6206869B2 (ja) * 2013-05-28 2017-10-04 National University Corporation Nagoya University Endoscopic observation support device
WO2018179991A1 (fr) 2017-03-30 2018-10-04 FUJIFILM Corporation Endoscope system and operating method therefor
US12082770B2 (en) 2018-09-20 2024-09-10 Nec Corporation Location estimation apparatus, location estimation method, and computer readable recording medium
US11229492B2 (en) 2018-10-04 2022-01-25 Biosense Webster (Israel) Ltd. Automatic probe reinsertion
EP3870063A1 (fr) * 2018-10-26 2021-09-01 Koninklijke Philips N.V. Intraluminal ultrasound navigation guidance and associated devices, systems, and methods
WO2025027750A1 (fr) * 2023-07-31 2025-02-06 Olympus Medical Systems Corp. Endoscopic examination support method, endoscopic examination support device, and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002200030A (ja) * 2000-12-27 2002-07-16 Olympus Optical Co Ltd Endoscope position detection device
JP2002345725A (ja) * 2001-05-22 2002-12-03 Olympus Optical Co Ltd Endoscope system
WO2007129493A1 (fr) * 2006-05-02 2007-11-15 National University Corporation Nagoya University Device for observation of a medical image
JP2009125394A (ja) * 2007-11-26 2009-06-11 Toshiba Corp Intravascular image diagnostic apparatus and intravascular image diagnostic system
JP2009279249A (ja) * 2008-05-23 2009-12-03 Olympus Medical Systems Corp Medical device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8894566B2 (en) 2012-03-06 2014-11-25 Olympus Medical Systems Corp. Endoscope system
WO2014061566A1 (fr) * 2012-10-16 2014-04-24 Olympus Corporation Observation apparatus, observation support device, observation support method, and program
JP2014079377A (ja) * 2012-10-16 2014-05-08 Olympus Corp Observation device, observation support device, observation support method, and program
US9295372B2 (en) 2013-09-18 2016-03-29 Cerner Innovation, Inc. Marking and tracking an area of interest during endoscopy
US9805469B2 (en) 2013-09-18 2017-10-31 Cerner Innovation, Inc. Marking and tracking an area of interest during endoscopy
EP4154795A4 (fr) * 2020-05-21 2023-07-12 NEC Corporation Image processing device, control method, and storage medium

Also Published As

Publication number Publication date
JP2012024518A (ja) 2012-02-09

Similar Documents

Publication Publication Date Title
WO2012014438A1 (fr) Device, method, and program for facilitating endoscopic observation
JP5380348B2 (ja) System and method, and device and program, for supporting endoscopic observation
JP5551957B2 (ja) Projection image generation device, operating method therefor, and projection image generation program
JP5421828B2 (ja) Endoscopic observation support system, endoscopic observation support device, operating method therefor, and program
CN106659373B (zh) Dynamic 3D lung map view for tool navigation inside the lung
JP5535725B2 (ja) Endoscopic observation support system, endoscopic observation support device, operating method therefor, and program
JP5918548B2 (ja) Endoscopic image diagnosis support device, operating method therefor, and endoscopic image diagnosis support program
JP5504028B2 (ja) Observation support system, method, and program
JP6254053B2 (ja) Endoscopic image diagnosis support device, system and program, and operating method of endoscopic image diagnosis support device
JP5369078B2 (ja) Medical image processing device and method, and program
JP5961504B2 (ja) Virtual endoscopic image generation device, operating method therefor, and program
JP2010517632A (ja) System for continuous guidance of an endoscope
JP2012200403A (ja) Endoscope insertion support device, operating method therefor, and endoscope insertion support program
JP2015083040A (ja) Image processing device, method, and program
CN111093505B (zh) Radiation imaging apparatus and image processing method
JP2012165838A (ja) Endoscope insertion support device
JP2014064722A (ja) Virtual endoscopic image generation device and method, and program
US20230419517A1 (en) Shape measurement system for endoscope and shape measurement method for endoscope
JP6145870B2 (ja) Image display device and method, and program
JP2002253480A (ja) Medical treatment support device
JP5554028B2 (ja) Medical image processing apparatus, medical image processing program, and X-ray CT apparatus
JP4445792B2 (ja) Insertion support system
JP5366713B2 (ja) Digestive tract image display device and control program for displaying digestive tract image data
JP7609278B2 (ja) Image processing device, image processing method, and program
JP4190454B2 (ja) Insertion support device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11812040; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 11812040; Country of ref document: EP; Kind code of ref document: A1)