
WO2009081669A1 - Image processing device and image processing program - Google Patents


Info

Publication number
WO2009081669A1
WO2009081669A1 (application PCT/JP2008/070783)
Authority
WO
WIPO (PCT)
Prior art keywords
tube
movement
image
motion
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2008/070783
Other languages
English (en)
Japanese (ja)
Inventor
Takehiro Matsuda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Publication of WO2009081669A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/042 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by a proximal camera, e.g. a CCD camera
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/041 Capsule endoscopes for imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/06 Devices, other than using radiation, for detecting or locating foreign bodies; Determining position of diagnostic devices within or on the body of the patient
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/06 Devices, other than using radiation, for detecting or locating foreign bodies; Determining position of diagnostic devices within or on the body of the patient
    • A61B 5/061 Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
    • A61B 5/062 Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body using magnetic field

Definitions

  • the present invention relates to an image processing apparatus and an image processing program for processing time-series images in a tube, which are picked up by an image pickup device that moves inside the tube.
  • a medical imaging apparatus that sequentially captures time-series images inside a tube while moving inside the tube.
  • when a capsule endoscope is swallowed from a patient's mouth and introduced into the body, it sequentially captures images while moving through the tube by peristaltic motion and the like, transmits them to an external receiving device, and is eventually discharged from the body.
  • in an examination by capsule endoscope, the patient can move freely by carrying the receiving device.
  • the doctor reviews the time-series intraluminal images received by the external receiving device on a diagnostic workstation or the like, and if an affected area is found, inserts a medical treatment tool into the patient's body again as necessary, or incises the patient's body, to perform medical procedures such as tissue collection, hemostasis, and excision of the affected area.
  • when an image shows an affected part, the position of the imaging device at the time of imaging corresponds to the vicinity of that affected part. Therefore, if the position of the imaging device at the time of imaging can be determined, the position of the affected part can be estimated.
  • an antenna for data transmission built in the capsule endoscope and a plurality of receiving antennas arranged at predetermined positions on the body surface are used.
  • a method for estimating the position and orientation of an antenna of a capsule endoscope based on the reception intensity of an electromagnetic wave signal is disclosed (see Patent Document 1).
  • when the position of the capsule endoscope is estimated based on the reception intensity of the electromagnetic wave signal as in Patent Document 1, there are the following problems. That is, the reception intensity fluctuates while the electromagnetic wave signal transmitted from the antenna of the capsule endoscope inside the body travels to the receiving antenna outside the body. For example, the attenuation rate of the electromagnetic wave signal varies depending on the type of internal tissue, such as the organs it passes through, so the degree of fluctuation in reception intensity varies with the internal organs lying between the antenna of the capsule endoscope and the external receiving antenna.
  • the reception intensity may also fluctuate under the influence of changes in the environment around the patient under examination, for example noise from electronic equipment within the patient's range of action, or a positional shift of a receiving antenna.
  • as information about the position of the imaging device at the time of imaging, it is useful to know not the spatial coordinates in the body but how far along the tube the device had moved from the tube entrance or exit, or from the start or end position of a specific organ, when the image was captured. For example, in an organ that changes shape inside the body, such as the small intestine, even if the position of the imaging device at the time of imaging is known in body coordinates, the specified position and the actual position of the affected part no longer match once the organ deforms. On the other hand, if the position of the imaging device at the time of imaging is known as a distance from a base point such as the tube entrance, the position of the affected part can be determined even when the organ is deformed.
  • the present invention has been made in view of the above, and aims to provide an image processing device and an image processing program capable of calculating the amount of movement of an imaging device moving through a tubular lumen in the body toward the deep portion of the tube.
  • an image processing apparatus that processes time-series tube images taken by an image pickup device that moves inside a tube in the body.
  • image pattern determination means for determining the attitude of the imaging device with respect to the tube and the relative movement of the imaging device with respect to the subject when the intraluminal image was captured, and movement amount calculation means for calculating the amount of movement of the imaging device toward the deep tube portion using the information on the attitude and relative movement of the imaging device determined by the image pattern determination means.
  • the image processing program causes a computer that processes time-series intraluminal images captured by an imaging device moving inside a tube in the body to execute an image pattern determination step of determining the attitude of the imaging device with respect to the tube and the relative movement of the imaging device with respect to a subject when the intraluminal image was captured, and a movement amount calculation step of calculating the amount of movement of the imaging device toward the deep tube portion using the information on the attitude and relative movement determined in the image pattern determination step.
  • according to the image processing device of the present invention, the attitude of the imaging device and its relative movement with respect to the subject are determined using the time-series intraluminal images captured by the imaging device, so the amount of movement toward the deep tube portion can be calculated easily.
  • FIG. 1 is a schematic diagram illustrating an overall configuration of an image processing acquisition system including an image processing apparatus.
  • FIG. 2 is a block diagram illustrating a functional configuration of the image processing apparatus.
  • FIG. 3 is an overall flowchart illustrating a calculation processing procedure performed by the calculation unit of the image processing apparatus.
  • FIG. 4 is a flowchart showing a detailed processing procedure of the deep tube detection processing.
  • FIG. 5 is a flowchart showing a detailed processing procedure of the image motion determination process.
  • FIG. 6 is a diagram illustrating an example of a pre-time-series image.
  • FIG. 7 is a diagram illustrating an example of the processing target image.
  • FIG. 8 is a diagram showing the motion vectors calculated based on the setting areas of the pre-time-series image in FIG. 6.
  • FIG. 9 is a flowchart showing a detailed processing procedure of the parallel movement determination processing.
  • FIG. 10 is a flowchart showing a detailed processing procedure of subject direction movement determination processing.
  • FIG. 11 is a flowchart showing a detailed processing procedure of the rotational movement determination processing.
  • FIG. 12 is a flowchart showing a detailed processing procedure of the no-motion determination process.
  • FIG. 13 is a flowchart showing a detailed processing procedure of the tube movement amount calculation processing.
  • FIG. 14 is a diagram illustrating an example of the processing target image.
  • FIG. 15 is a diagram illustrating how the capsule endoscope moves with respect to the subject when the processing target image of FIG. 14 is captured.
  • FIG. 16 is a diagram illustrating an example of the processing target image.
  • FIG. 17 is a diagram illustrating how the capsule endoscope moves with respect to the subject when the processing target image of FIG. 16 is captured.
  • FIG. 18 is a model diagram of the inside of the tube and the capsule endoscope 3 for explaining the calculation processing of the tube movement amount.
  • FIG. 19 is a diagram illustrating an example of the processing target image.
  • FIG. 20 is a diagram illustrating how the capsule endoscope moves with respect to the subject when the processing target image of FIG. 19 is captured.
  • FIG. 21 is a diagram illustrating an example of the processing target image.
  • FIG. 22 is a diagram illustrating how the capsule endoscope moves with respect to the subject when the processing target image of FIG. 21 is captured.
  • FIG. 23 is an explanatory diagram illustrating a process of estimating the direction of the deep tube portion.
  • FIG. 24 is a model diagram of the inside of the tube and the capsule endoscope 3 for explaining the calculation processing of the tube movement amount.
  • FIG. 25 is a schematic diagram illustrating a result of the tube relative position calculation process.
  • FIG. 1 is a schematic diagram showing the overall configuration of an image processing acquisition system including an image processing apparatus according to an embodiment of the present invention.
  • a capsule endoscope is used as an example of an imaging device that moves inside the body.
  • the image processing system includes a capsule endoscope 3 that captures images of a tube inside the body of a subject 1 (intraluminal images), a receiving device 4 that receives the intraluminal image data wirelessly transmitted from the capsule endoscope 3, and an image processing device 70 that performs image processing on the intraluminal images received by the receiving device 4.
  • a portable recording medium 5 is used for transferring the intraluminal image data between the receiving device 4 and the image processing device 70.
  • the capsule endoscope 3 has an imaging function, a wireless function, and an illumination function for illuminating the imaging site, and is, for example, swallowed from the mouth of the subject 1, such as a person or an animal, for examination and thereby introduced into the subject 1. Until it is naturally discharged, it sequentially captures intraluminal images of the esophagus, stomach, small intestine, large intestine, and so on, and wirelessly transmits them outside the body.
  • the receiving device 4 receives the intraluminal image transmitted wirelessly from the capsule endoscope 3 via the plurality of receiving antennas A1 to An.
  • the receiving device 4 is configured so that the portable recording medium 5 can be freely attached and detached, and sequentially stores the received image data of the intraluminal image in the portable recording medium 5.
  • it is assumed that the image data of the intraluminal images captured from time t(0) at the tube entrance to time t(T) at the tube exit are stored in the portable recording medium 5 in chronological order.
  • the time t(0) at the tube entrance corresponds, for example, to the time when the capsule endoscope 3 is introduced into the subject, and the time t(T) at the tube exit corresponds to the time when the capsule endoscope 3 is discharged from the body.
  • the receiving antennas A1 to An are constituted by, for example, loop antennas and are attached to the external surface of the subject 1. Specifically, they are dispersedly arranged at positions corresponding to the passage route of the capsule endoscope 3 in the subject 1.
  • the receiving antennas A1 to An may instead be arranged in a distributed manner on a jacket worn by the subject 1. In this case, when the subject 1 wears the jacket, the receiving antennas A1 to An are placed at predetermined positions on the body surface of the subject 1 corresponding to the passage route of the capsule endoscope 3 in the subject 1.
  • one or more receiving antennas may be arranged with respect to the subject 1, and the number thereof is not limited.
  • the image processing apparatus 70 is realized by a general-purpose computer such as a workstation or a personal computer, and is configured such that the portable recording medium 5 can be freely attached and detached.
  • the image processing apparatus 70 acquires a time-series tube image stored in the portable recording medium 5 and displays the acquired tube image on a display such as an LCD or an ELD.
  • FIG. 2 is a block diagram illustrating the functional configuration of the image processing apparatus 70.
  • the image processing device 70 includes an external interface 71, an input unit 72, a display unit 73, a storage unit 74, a calculation unit 75, and a control unit 76 that controls the operation of the entire image processing device 70.
  • the external interface 71 is for acquiring image data of an intraluminal image captured by the capsule endoscope 3 and received by the receiving device 4.
  • for example, it is a reader device to which the portable recording medium 5 is detachably mounted and which reads out the image data of the intraluminal images stored in the portable recording medium 5.
  • the acquisition of the time-series tube images captured by the capsule endoscope 3 is not limited to the configuration using the portable recording medium 5.
  • a separate server may be installed, and a time-series tube image may be stored in advance in this server.
  • in this case, the external interface is configured as a communication device or the like for connecting to the server, and the time-series intraluminal images are acquired through data communication with the server via this external interface. Alternatively, a configuration in which the acquired images are stored in advance may also be adopted.
  • the input unit 72 is realized by, for example, a keyboard, a mouse, a touch panel, various switches, and the like, and outputs input instruction information to the control unit 76.
  • the display unit 73 is realized by a display device such as an LCD or an ELD, and displays various screens including a display screen of time-series images in the tube under the control of the control unit 76.
  • the storage unit 74 is realized by various IC memories such as ROM and RAM (including rewritable flash memory), a built-in hard disk, an information storage medium such as a CD-ROM connected via a data communication terminal, and a reading device therefor.
  • a program relating to the operation of the image processing apparatus 70, a program for realizing various functions provided in the image processing apparatus 70, data relating to execution of these programs, and the like are stored.
  • an image processing program 741 for the calculation unit 75 to calculate the position of the capsule endoscope 3 in the tube is stored.
  • the calculation unit 75 sequentially processes time-series tube images captured by the capsule endoscope 3 and performs various calculation processes related to the calculation of the position of the capsule endoscope 3 in the tube.
  • the calculation unit 75 includes an image pattern determination unit 751 as image pattern determination means, a tube movement amount calculation unit 754 as tube movement amount calculation means, and a tube relative position calculation unit 756 as tube relative position calculation means.
  • it can be determined that an intraluminal image is either an image captured in the tube depth direction in a posture in which the capsule endoscope 3 faces the deep tube region, or an image of the tube inner wall captured in a posture facing a direction substantially perpendicular to the tube depth direction.
  • the image motion determination unit 753 associates the same target regions appearing in intraluminal images captured at different times in the time series, generates vector data (motion vectors) representing the amount of positional change, and determines, based on each intraluminal image, the relative movement of the capsule endoscope 3 with respect to the subject. Specifically, the image motion determination unit 753 classifies the movement into one of the motion patterns "no motion", "parallel movement", "movement toward the subject", "movement away from the subject", and "movement outside the defined patterns".
  • the tube movement amount calculation unit 754 calculates the amount of movement in the tube depth direction between time-series adjacent intraluminal images (hereinafter referred to as the "tube movement amount"), using the posture of the capsule endoscope 3 and the relative movement of the capsule endoscope 3 with respect to the subject determined by the image pattern determination unit 751.
  • this tube movement amount calculation unit 754 includes a tube depth direction estimation unit 755 as tube depth direction estimation means.
  • for an intraluminal image in which no deep tube region was detected by the deep tube detection unit 752, the tube depth direction estimation unit 755 estimates in which direction the tube depth lies with respect to the imaged subject.
  • the tube relative position calculation unit 756 calculates, based on the tube movement amounts of the capsule endoscope 3 calculated by the tube movement amount calculation unit 754, the relative position of the capsule endoscope 3 with respect to the total length of the tube at the time each intraluminal image was captured.
  • FIG. 3 is an overall flowchart showing a calculation processing procedure performed by the calculation unit 75 of the image processing apparatus 70. Note that the processing described here is realized by the calculation unit 75 reading and executing the image processing program 741 stored in the storage unit 74.
  • the computing unit 75 first initializes an index i indicating the time-series order of the target intraluminal images to "0" (step a1). Then, via the external interface 71 and the control unit 76, the calculation unit 75 acquires the intraluminal image at time t(i), which is the image captured immediately before the image to be processed (hereinafter referred to as the "pre-time-series image I(i)" as appropriate), and the intraluminal image at time t(i+1), which is the image to be processed (hereinafter referred to as the "processing target image I(i+1)" as appropriate) (step a3).
  • next, the deep tube detection unit 752 detects a deep tube region from the processing target image I(i+1). Because the deep tube portion is far from the capsule endoscope 3, the illumination light from the capsule endoscope 3 hardly reaches it, and it appears as a dark region in the image. The deep tube detection unit 752 therefore detects a region where dark pixels are clustered as the deep tube region.
  • FIG. 4 is a flowchart showing a detailed processing procedure of the tube depth detection processing.
  • the deep tube detection unit 752 compares the G value of each pixel constituting the processing target image I(i+1) with a predetermined threshold, and extracts the pixels whose G value is equal to or less than the threshold as dark pixels (step b1).
  • the G value is used here because it is close to the wavelength of the absorption band of hemoglobin in blood and has high sensitivity and resolution, and therefore expresses light and dark information of the intraluminal image well. Note that the dark pixel may be extracted using the value of the color component other than the G value.
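The dark-pixel extraction of step b1 can be sketched as a simple per-pixel threshold on the G channel. This is a minimal illustration, not the patent's implementation; the function name, the image representation (a grid of (R, G, B) tuples), and the threshold value are all assumptions.

```python
def extract_dark_pixels(image, g_threshold=48):
    """Return a binary mask marking pixels whose G value is <= g_threshold.

    `image` is a height x width grid of (R, G, B) tuples; the G channel is
    used because it tracks the brightness of intraluminal images well.
    """
    return [[1 if pixel[1] <= g_threshold else 0 for pixel in row]
            for row in image]
```

The mask produced here would feed the labeling step that follows in the flowchart.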
  • the deep tube detection unit 752 performs a known labeling process on the dark pixels extracted in step b1, and assigns a unique value (label) to each group of adjacent dark pixels (step b3). The dark regions in the image are thereby partitioned.
  • a labeling processing method for example, the method disclosed in “CG-ARTS Association, Digital Image Processing, 181p, Labeling” can be used.
  • the deep tube detection unit 752 calculates the area of each dark region in the processing target image I(i+1) determined in step b3, and extracts the dark region having the largest area (step b5).
  • dark regions other than the deep tube portion, such as shadows of folds in the tube mucosa, may also be extracted, but such regions are usually smaller in area than the deep tube portion, so the two can be distinguished by area.
  • the deep tube detection unit 752 then compares the area of the dark region extracted in step b5 with a predetermined reference area, and if the area exceeds the reference, detects that dark region as the deep tube region.
  • otherwise, the deep tube detection unit 752 determines that no deep tube portion appears in the processing target image I(i+1) ("no deep tube region", step b11), and the process returns to step a5 in FIG. 3.
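Steps b3 to b11 amount to connected-component labeling followed by a largest-area test. Below is a minimal sketch assuming a binary dark-pixel mask as input and 4-connectivity; the function name and the `min_area` parameter are illustrative, and production code would typically use a library labeling routine rather than the breadth-first search shown here.

```python
from collections import deque

def detect_deep_tube_region(mask, min_area):
    """Label 4-connected dark-pixel groups (step b3), pick the largest
    (step b5), and accept it as the deep tube region only if its area
    exceeds min_area; return None otherwise (step b11)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    regions = {}  # label -> list of (y, x) pixels in that dark region
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                labels[y][x] = next_label
                queue = deque([(y, x)])
                pixels = []
                while queue:  # breadth-first flood fill of one region
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                regions[next_label] = pixels
    if not regions:
        return None
    largest = max(regions.values(), key=len)
    return largest if len(largest) > min_area else None
```

The area test filters out the small dark regions caused by mucosal fold shadows mentioned above.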
  • FIG. 5 is a flowchart showing a detailed processing procedure of the image motion determination process.
  • the image motion determination unit 753 performs template matching using a plurality of setting regions set in the intraluminal image as templates in steps c1 to c5. Specifically, a motion vector at each template position is calculated between the pre-time-series image I (i) and the processing target image I (i + 1).
  • FIG. 6 is a diagram illustrating an example of the pre-time-series image I(i), FIG. 7 is a diagram illustrating an example of the processing target image I(i+1), and FIG. 8 is a diagram showing the motion vectors calculated based on the setting areas of the pre-time-series image in FIG. 6.
  • the image motion determination unit 753 sequentially sets each setting area 111 as a template, performs a known template matching process, and detects from the processing target image the corresponding area that best matches each template (the area with the highest correlation value).
  • as the template matching method, for example, the technique disclosed in Japanese Patent Laid-Open No. 9-102039 can be used. Specifically, for the template image (the pre-time-series image), polar coordinate sampling points are determined in a geometric progression in the radial direction and in an arithmetic progression in the angular direction to obtain a polar coordinate template image. A polar coordinate search image is similarly obtained for each scanning point of the search image (the processing target image).
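The sampling grid described above, with radii in a geometric progression and angles in an arithmetic progression, can be generated as follows. This is only a sketch of the sampling-point layout, not the matching method of JP H9-102039 itself, and all parameter names are assumptions.

```python
import math

def log_polar_samples(cy, cx, r_min, r_ratio, n_radii, n_angles):
    """Generate (y, x) sampling points around center (cy, cx).

    Radii follow a geometric progression r_min * r_ratio**i, and angles an
    arithmetic progression with step 2*pi / n_angles.
    """
    points = []
    for i in range(n_radii):
        r = r_min * (r_ratio ** i)
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            points.append((cy + r * math.sin(theta),
                           cx + r * math.cos(theta)))
    return points
```

Sampling image intensities at these points (with interpolation) would yield the polar coordinate template and search images mentioned in the text.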
  • the image motion determination unit 753 then calculates, as a motion vector, the change in center coordinates between each setting area of the pre-time-series image I(i) and the corresponding area of the processing target image I(i+1) found by the search (step c5). For example, as shown in FIG. 8, motion vectors 115 are calculated for the corresponding areas that were searched and successfully matched.
  • the obtained matching result data is held in the storage unit 74. For example, the success or failure of matching, the obtained corresponding area, its correlation value, the motion vector, etc. are stored in association with the identification number of the setting area as a template.
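For illustration, the motion vector of one setting area can be found with plain block matching using a sum-of-absolute-differences (SAD) score, a simpler stand-in for the polar-coordinate matching the patent cites. The grayscale-frame representation (nested lists) and all names are assumptions.

```python
def match_template(prev, curr, top, left, size, radius):
    """Find where the `size` x `size` block of `prev` anchored at
    (top, left) best matches in `curr`, searching within +/- radius
    pixels. Returns ((dy, dx), sad): the motion vector and its cost."""
    h, w = len(curr), len(curr[0])
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if not (0 <= ty and ty + size <= h and 0 <= tx and tx + size <= w):
                continue  # candidate block would fall outside the frame
            sad = sum(abs(prev[top + i][left + j] - curr[ty + i][tx + j])
                      for i in range(size) for j in range(size))
            if best is None or sad < best[1]:
                best = ((dy, dx), sad)
    return best
```

A low SAD corresponds to a successful match; per the text, the vector, its score, and the match success flag would be stored against the setting area's identification number.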
  • the image motion determination unit 753 uses the motion vectors calculated in step c5 to determine the relative movement of the capsule endoscope 3 with respect to the subject from time t(i) to t(i+1). First, the image motion determination unit 753 compares the number of successful matches on the processing target image with a reference success count set in advance as a threshold; if the number is equal to or less than the reference success count (step c7: Yes), the process moves to step c27. This is because, when many matches fail, the result of the image motion determination process is considered unreliable, so the processing of steps c9 to c25 is skipped. On the other hand, when the number of successful matches is greater than the reference success count (step c7: No), the image motion determination unit 753 executes the parallel movement determination process (step c9).
  • FIG. 9 is a flowchart showing a detailed processing procedure of the parallel movement determination process.
  • the image motion determination unit 753 determines whether or not the number of matched vectors is equal to or less than a predetermined number set in advance as a determination threshold (step d1). When the number of matched vectors is greater than the predetermined number (step d1: No), the image motion determination unit 753 calculates the average of the angles (directions) of the motion vectors, selects the motion vector whose angle differs most in absolute value from that average, and excludes it from the processing in step d5 onward as an outlier (step d3). This is because a motion vector may have been matched at a wrong position, and excluding such a vector improves the determination accuracy. If the number of successfully matched vectors is equal to or less than the predetermined number, excluding outliers would reduce the number of motion vectors used to determine the motion pattern and might cause a wrong motion pattern to be selected, so in that case (step d1: Yes) the outlier exclusion of step d3 is not performed.
  • the image motion determination unit 753 calculates the variation in the angle of the motion vector based on the motion vector other than the motion vector excluded as an outlier (step d5).
  • angular dispersion is calculated as the angular variation.
  • the image motion determination unit 753 sets a vector angle dispersion threshold according to the number of matching vectors (step d7).
  • the vector angle dispersion threshold is set such that the smaller the number of matching vectors, the smaller the value. This is because when the number of motion vectors used for the parallel movement determination is small, there is a high possibility that the motion pattern accidentally applies to “parallel movement”.
  • the vector angle dispersion threshold is calculated according to equation (1) from a preset reference vector angle dispersion threshold, where V_p' is the vector angle dispersion threshold, V_p is the reference vector angle dispersion threshold, p is the number of successfully matched motion vectors, N is the number of motion vectors when all matches succeed, and α_p is a conversion coefficient.
  • the image motion determination unit 753 classifies the motion pattern as "parallel movement" when the angular dispersion of the motion vectors calculated in step d5 is equal to or less than the vector angle dispersion threshold set in step d7 (step d9: Yes; step d11). That is, when the motion vectors in the processing target image I(i+1), calculated from the pre-time-series image I(i) and the processing target image I(i+1), point in substantially the same direction, the motion pattern is classified as "parallel movement". The image motion determination unit 753 then returns from the parallel movement determination process of step c9 in FIG. 5 and proceeds to step c11.
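The parallel movement determination (steps d1 to d11) can be sketched as follows. Because equation (1) is not reproduced in the text, the threshold adjustment below uses an assumed linear form, V_p' = V_p - α_p(N - p), chosen only to satisfy the stated property that the threshold shrinks as fewer matches succeed; the angle statistics also ignore wraparound at ±180 degrees for brevity, and all names are illustrative.

```python
import math

def is_parallel_movement(vectors, v_ref, n_total, alpha, min_vectors=3):
    """Classify the motion pattern as 'parallel movement' (steps d1-d11).

    vectors:  list of (dy, dx) motion vectors from successful matches
    v_ref:    reference vector angle dispersion threshold (V_p)
    n_total:  number of vectors when every match succeeds (N)
    alpha:    conversion coefficient (alpha_p); the linear form of
              equation (1) used below is an assumption
    """
    angles = [math.atan2(dy, dx) for dy, dx in vectors]
    if len(angles) > min_vectors:  # step d3: drop the worst outlier
        mean = sum(angles) / len(angles)
        angles.remove(max(angles, key=lambda a: abs(a - mean)))
    mean = sum(angles) / len(angles)
    variance = sum((a - mean) ** 2 for a in angles) / len(angles)  # step d5
    # step d7: shrink the threshold when fewer matches succeeded
    p = len(vectors)
    v_thresh = v_ref - alpha * (n_total - p)
    return variance <= v_thresh  # step d9
```

When the vectors all point roughly the same way, the variance is near zero and the pattern is classified as "parallel movement".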
  • step c11 the image motion determination unit 753 determines whether or not the motion pattern is “parallel movement”. Then, when the motion pattern is “parallel movement”, that is, when the motion pattern is classified as “parallel movement” by the parallel movement determination processing in step c9 (step c11: Yes), the image motion determination unit 753 The process ends. On the other hand, when the motion pattern is not “parallel movement” (step c11: No), the image motion determination unit 753 executes subject direction movement determination processing (step c13).
  • FIG. 10 is a flowchart showing a detailed processing procedure of subject direction movement determination processing.
In the subject direction movement determination process, the image motion determination unit 753 first sets a predetermined number of center point candidates in the processing target image I(i+1) (step e1). The positions and number of the center point candidates set in the processing target image I(i+1) are determined in advance. Next, the image motion determination unit 753 selects a depth center point from the center point candidates. The depth center point here is the position on the image corresponding to the moving direction of the capsule endoscope 3; for example, the center point candidate closest to the position in the moving direction of the capsule endoscope 3 on the processing target image I(i+1) is selected as the depth center point.
Specifically, the image motion determination unit 753 calculates, for each center point candidate, a vector from the center point candidate to the start point of each motion vector (hereinafter referred to as a “start point vector”), and then calculates, for each start point vector, the inner product (hereinafter referred to as the “motion vector inner product”) with the motion vector starting from the position indicated by that start point vector (step e3). Next, the image motion determination unit 753 calculates the average of the motion vector inner products for each center point candidate (step e5) and selects the center point candidate having the maximum absolute value of the average motion vector inner product as the depth center point (step e7). This is because, for the center point candidate that is the correct depth center point, the direction of each start point vector connecting the candidate to the start point of a motion vector is close to the direction of the motion vector starting from the position indicated by that start point vector. Here, the start point vector and the motion vector point in the same direction for “movement toward the subject” and in opposite directions for “movement away from the subject”. The directions being close means that the angle formed by the two vectors is close to 0 degrees or 180 degrees, so the absolute value of the inner product of the normalized start point vector and motion vector (the motion vector inner product) is close to “1”. Therefore, the average of the motion vector inner products is calculated for each center point candidate, and the candidate having the maximum absolute value of the average motion vector inner product is selected as the depth center point. Note that when a motion vector is calculated at a position very close to the depth center point, the motion vector may be a zero vector; for a zero vector, the motion vector inner product is treated as “1.0” when the average is a positive value and “−1.0” when the average is a negative value.
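The depth center point selection of steps e1 to e7 can be sketched as below. The sketch assumes normalized vectors so that the inner products lie in [−1, 1]; the candidate grid and the zero-vector special case are simplified.

```python
import math

def unit(v):
    n = math.hypot(v[0], v[1])
    return (0.0, 0.0) if n == 0 else (v[0] / n, v[1] / n)

def mean_inner_product(candidate, motion_vectors):
    """Average inner product between normalized start point vectors and
    normalized motion vectors; motion_vectors is a list of
    (start_point, vector) pairs."""
    total = 0.0
    for (sx, sy), (vx, vy) in motion_vectors:
        s = unit((sx - candidate[0], sy - candidate[1]))  # start point vector
        m = unit((vx, vy))                                # motion vector
        total += s[0] * m[0] + s[1] * m[1]
    return total / len(motion_vectors)

def select_depth_center(candidates, motion_vectors):
    # The candidate maximizing |average inner product| best explains a
    # radial field, i.e. motion toward or away from that point.
    return max(candidates, key=lambda c: abs(mean_inner_product(c, motion_vectors)))
```

For a field of vectors radiating outward from the origin, the origin candidate yields an average inner product of 1 and is selected.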
Next, the image motion determination unit 753 calculates the sum of the number of matching failures and the number of motion vectors whose direction does not match the direction of the start point vector from the depth center point. If this sum is less than a predetermined number set in advance as the determination threshold of step e9 (step e9: No), the process proceeds to step e11; if it is equal to or greater than the predetermined number (step e9: Yes), this process is terminated without performing the processing after step e9. Specifically, if the average motion vector inner product is a positive value, a motion vector whose motion vector inner product is equal to or less than a predetermined inner product lower limit is determined to have a direction that does not match the direction of the start point vector. If the average motion vector inner product is a negative value, a motion vector whose motion vector inner product is equal to or greater than a predetermined inner product upper limit is determined to have a direction that does not match the direction of the start point vector. It is then determined whether the sum of the number of such motion vectors and the number of matching failures is equal to or greater than the predetermined number.
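The direction-mismatch check of step e9 can be sketched as follows. The inner product limits and the allowed count are hypothetical parameters, since the preset values are not given in the text.

```python
def too_many_mismatches(inner_products, avg_inner, n_failures,
                        lower_limit=-0.3, upper_limit=0.3, max_allowed=5):
    # A motion vector "disagrees" with its start point vector when its inner
    # product falls on the wrong side of the preset limit for the sign of
    # the average inner product.
    if avg_inner >= 0:
        bad = sum(1 for ip in inner_products if ip <= lower_limit)
    else:
        bad = sum(1 for ip in inner_products if ip >= upper_limit)
    return bad + n_failures >= max_allowed
```

With one disagreeing vector and one matching failure the check passes; four matching failures push the sum over the hypothetical limit and abort the determination.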
In step e11, the image motion determination unit 753 sets the approach determination inner product threshold and the separation determination inner product threshold based on the number of matching vectors. The approach determination inner product threshold is set such that the smaller the number of matching vectors, the closer the value is to “1”. This is because, when the number of motion vectors used for the subject direction movement determination is small, there is a high possibility that the motion pattern will accidentally be classified as “movement toward the subject”, so the approach determination inner product threshold is set high.
The approach determination inner product threshold is calculated according to the following equation (2) based on a preset reference approach determination inner product threshold, where Vb′ is the approach determination inner product threshold, Vb is the reference approach determination inner product threshold, p is the number of motion vectors successfully matched, N is the number of motion vectors when all matching is successful, and κb is a conversion coefficient.
The separation determination inner product threshold is set such that the smaller the number of matching vectors, the closer the value is to “−1”. This is because, when the number of motion vectors used for the subject direction movement determination is small, there is a high possibility that the motion pattern will accidentally be classified as “movement away from the subject”, so the separation determination inner product threshold is set low. The separation determination inner product threshold is calculated according to the following equation (3) based on a preset reference separation determination inner product threshold, where Vd′ is the separation determination inner product threshold, Vd is the reference separation determination inner product threshold, p is the number of motion vectors successfully matched, N is the number of motion vectors when all matching is successful, and κd is a conversion coefficient.
When the average of the motion vector inner products calculated for the depth center point in step e5 is equal to or greater than the approach determination inner product threshold set in step e11 (step e13: Yes), the image motion determination unit 753 classifies the motion pattern as “movement toward the subject” (step e15). That is, if each motion vector in the processing target image I(i+1) has a substantially positive inner product with the corresponding start point vector, the motion pattern is classified as “movement toward the subject”. On the other hand, when the average of the motion vector inner products calculated for the depth center point in step e5 is equal to or less than the separation determination inner product threshold set in step e11 (step e17: Yes), the image motion determination unit 753 classifies the motion pattern as “movement away from the subject” (step e19). That is, if each motion vector in the processing target image I(i+1) has a substantially negative inner product with the corresponding start point vector, the motion pattern is classified as “movement away from the subject”. The image motion determination unit 753 then returns to step c13 in FIG. 5 and proceeds to step c15.
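The two-threshold classification of steps e11 to e19 can be sketched as follows. Equations (2) and (3) are not reproduced in the text, so their effect is modeled here by a hypothetical linear interpolation that moves the thresholds toward +1 and −1 as the matched fraction p/N drops; the reference thresholds and coefficients are illustrative values.

```python
def classify_subject_direction(avg_inner, p, n_total,
                               v_b=0.8, v_d=-0.8, kappa_b=0.5, kappa_d=0.5):
    # Hypothetical stand-ins for equations (2) and (3): with fewer matched
    # vectors (smaller p), push the approach threshold toward +1 and the
    # separation threshold toward -1, making an accidental classification
    # less likely.
    shortfall = 1.0 - p / n_total
    vb = v_b + kappa_b * shortfall * (1.0 - v_b)   # approach threshold
    vd = v_d - kappa_d * shortfall * (1.0 + v_d)   # separation threshold
    if avg_inner >= vb:
        return "movement toward the subject"
    if avg_inner <= vd:
        return "movement away from the subject"
    return "undetermined"
```

With all vectors matched, an average inner product of 0.95 classifies as approaching; with only 2 of 10 matched, the raised threshold leaves 0.85 undetermined.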
In step c15 of FIG. 5, the image motion determination unit 753 determines whether or not the motion pattern is “movement toward the subject”. When the motion pattern is “movement toward the subject”, that is, when the motion pattern was classified as “movement toward the subject” by the subject direction movement determination process in step c13 (step c15: Yes), the image motion determination unit 753 ends this process. On the other hand, when the motion pattern is not “movement toward the subject” (step c15: No), the image motion determination unit 753 subsequently determines whether or not the motion pattern is “movement away from the subject” (step c17). When the motion pattern is “movement away from the subject”, that is, when it was classified as “movement away from the subject” by the subject direction movement determination process in step c13 (step c17: Yes), this process ends. If the motion pattern is not “movement away from the subject” (step c17: No), the image motion determination unit 753 executes the rotational movement determination process (step c19).
FIG. 11 is a flowchart showing a detailed processing procedure of the rotational movement determination process.
In the rotational movement determination process, the image motion determination unit 753 first sets predetermined center point candidates in the processing target image (step f1). Next, the image motion determination unit 753 selects a rotation center point from the center point candidates. Specifically, the image motion determination unit 753 calculates, for each center point candidate, a start point vector from the center point candidate to the start point of each motion vector and, for each start point vector, the motion vector inner product with the motion vector starting from the position indicated by that start point vector (step f3). Next, the image motion determination unit 753 calculates the average of the absolute values of the motion vector inner products for each center point candidate (step f5) and selects the center point candidate having the minimum average absolute value as the rotation center point (step f7). This is because, assuming the motion pattern is “rotational movement”, for the center point candidate that is the correct rotation center, each start point vector connecting the candidate to the start point of a motion vector is approximately orthogonal to the motion vector starting from the position indicated by that start point vector, so the absolute value of the motion vector inner product is close to “0”. Therefore, the average absolute value of the motion vector inner products is calculated for each center point candidate, and the candidate having the minimum average absolute value is selected as the rotation center point. The image motion determination unit 753 then classifies the motion pattern as “rotational movement” when the average absolute value of the motion vector inner products calculated for the rotation center point selected in step f7 is equal to or less than a preset rotation determination threshold (step f9: Yes) (step f11). The image motion determination unit 753 then returns to step c19 in FIG. 5 and proceeds to step c21.
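The rotation center selection and the step f9 test can be sketched as below; the rotation determination threshold is an illustrative value, and vectors are normalized so inner products lie in [−1, 1].

```python
import math

def unit(v):
    n = math.hypot(v[0], v[1])
    return (0.0, 0.0) if n == 0 else (v[0] / n, v[1] / n)

def is_rotational_movement(candidates, motion_vectors, rot_thresh=0.2):
    """Returns (classified_as_rotation, rotation_center)."""
    def mean_abs_inner(c):
        # For a pure rotation about c, each start point vector is roughly
        # orthogonal to its motion vector, so |inner product| stays near 0.
        total = 0.0
        for (sx, sy), (vx, vy) in motion_vectors:
            s = unit((sx - c[0], sy - c[1]))
            m = unit((vx, vy))
            total += abs(s[0] * m[0] + s[1] * m[1])
        return total / len(motion_vectors)
    center = min(candidates, key=mean_abs_inner)
    return mean_abs_inner(center) <= rot_thresh, center
```

A tangential field circling the origin selects the origin candidate and is classified as rotational movement.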
In step c21, the image motion determination unit 753 determines whether or not the motion pattern is “rotational movement”. When the motion pattern is “rotational movement”, that is, when the motion pattern was classified as “rotational movement” by the rotational movement determination process in step c19 (step c21: Yes), the image motion determination unit 753 ends this process. On the other hand, when the motion pattern is not “rotational movement” (step c21: No), the image motion determination unit 753 executes the no-motion determination process (step c23).
FIG. 12 is a flowchart showing a detailed processing procedure of the no-motion determination process.
In the no-motion determination process, the image motion determination unit 753 first determines whether the number of motion vectors obtained by successful matching (the number of matching vectors) is equal to or less than a predetermined number set in advance as the determination threshold of step g1. If the number is greater than the predetermined number (step g1: No), the motion vector having the maximum magnitude is excluded as an outlier from the processing after step g5 (step g3). As in the parallel movement determination process, this improves the determination accuracy in case matching occurred at an incorrect position during motion vector calculation. When the number of matching vectors is equal to or less than the predetermined number (step g1: Yes), the outlier exclusion of step g3 is not performed. Next, the image motion determination unit 753 calculates the average magnitude of the motion vectors, excluding any motion vector removed as an outlier (step g5). Subsequently, the image motion determination unit 753 sets a vector average threshold according to the number of matching vectors (step g7). The vector average threshold is set such that the smaller the number of matching vectors, the smaller the value. This is because, when the number of motion vectors used for the no-motion determination is small, there is a high possibility that the motion pattern will accidentally be classified as “no motion”.
This vector average threshold is calculated according to the following equation (4) based on a preset reference vector average threshold, where Vs′ is the vector average threshold, Vs is the reference vector average threshold, p is the number of motion vectors successfully matched (the number of matching vectors), N is the number of motion vectors when all matching is successful (corresponding to the number of pixel areas set in the time-series preceding image I(i)), and κs is a conversion coefficient.
The image motion determination unit 753 classifies the motion pattern as “no motion” when the average magnitude of the motion vectors calculated in step g5 is equal to or less than the vector average threshold set in step g7 (step g9: Yes) (step g11). When the time-series preceding image I(i) and the processing target image I(i+1) show hardly any change, the motion vectors calculated in the processing target image I(i+1) are small, and the motion pattern is classified as “no motion”. The image motion determination unit 753 then returns to step c23 in FIG. 5 and proceeds to step c25. In step c25, the image motion determination unit 753 determines whether or not the motion pattern is “no motion”. When the motion pattern is “no motion”, that is, when the motion pattern was classified as “no motion” by the no-motion determination process in step c23 (step c25: Yes), the image motion determination unit 753 ends this process. On the other hand, when the motion pattern is not “no motion” (step c25: No), the image motion determination unit 753 proceeds to step c27.
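The no-motion determination of steps g1 to g11 can be sketched as follows. The outlier rule and equation (4)'s scaling are simplified: the exact equation is not reproduced in the text, so the threshold is shrunk here by a hypothetical linear factor p/N with coefficient `kappa`.

```python
import math

def is_no_motion(vectors, n_total, v_ref, kappa=1.0, min_keep=3):
    # With enough matches, drop the largest magnitude as an outlier
    # (a likely mismatch), as in step g3.
    mags = sorted(math.hypot(vx, vy) for vx, vy in vectors)
    if len(mags) > min_keep:
        mags = mags[:-1]
    avg = sum(mags) / len(mags)
    # Hypothetical stand-in for equation (4): the reference threshold V_s
    # shrinks as the matched fraction p/N drops.
    return avg <= v_ref * (len(vectors) / n_total) * kappa
```

Tiny vectors classify as “no motion”; large ones do not.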
In step c27, the image motion determination unit 753 classifies the motion pattern as “unset motion”. That is, the image motion determination unit 753 classifies the motion pattern as “unset motion” when it is determined in step c7 that the number of successful matches is equal to or less than the reference success number. In this case, when the processing target image I(i+1) is compared with the templates set in the time-series preceding image I(i), there are many places where matching fails, either because a matching position cannot be obtained or because the correlation value at the obtained template matching position is low, and the motion pattern is therefore classified as “unset motion”. Furthermore, even when the number of successfully matched templates is greater than the reference success number, a motion pattern that is set to none of “parallel movement”, “movement toward the subject”, “movement away from the subject”, “rotational movement”, and “no motion” by the processing of steps c9 to c25 is classified as “unset motion”. Such cases can be considered to arise when there is no position where correct template matching can be obtained and motion vectors are consequently calculated at incorrect matching positions, or when the mucous membrane in the body cavity imaged by the capsule endoscope 3 actually moves irregularly.
Thereafter, the image motion determination unit 753 returns to step a7 in FIG. 3. The motion pattern classification result obtained by the image motion determination process is held in the storage unit 74. In step a9 in FIG. 3, the tube movement amount calculation unit 754 calculates the tube movement amount of the capsule endoscope 3 toward the deep tube portion from time t(i) to t(i+1), based on the presence or absence of a deep tube region in the processing target image I(i+1) and the classified motion pattern.
FIG. 13 is a flowchart showing a detailed processing procedure of the tube movement amount calculation process.
In the tube movement amount calculation process, the tube movement amount calculation unit 754 first determines the classified motion pattern. In the case of “no motion” or “rotational movement” (step h1: Yes), the capsule endoscope 3 is considered not to have moved toward the deep tube portion, so the tube movement amount is set to “0” (step h3). The tube movement amount calculation unit 754 then returns to step a9 in FIG. 3. If the motion pattern is neither “no motion” nor “rotational movement” (step h1: No), the tube movement amount calculation unit 754 determines whether or not there is a deep tube region in the processing target image I(i+1) (step h5). When a deep tube region is detected in the processing target image I(i+1) (step h5: Yes), the process proceeds to step h7; when there is no deep tube region (step h5: No), the process proceeds to step h17. In step h7, the tube movement amount calculation unit 754 determines whether or not the motion pattern is “parallel movement”. If the motion pattern is “parallel movement” (step h7: Yes), the tube movement amount is set to “0” (step h9). The tube movement amount calculation unit 754 then returns to step a9 in FIG. 3.
FIG. 14 is a diagram illustrating an example of the processing target image I(i+1), and FIG. 15 is a diagram illustrating how the capsule endoscope 3 moves with respect to the subject when the processing target image I(i+1) of FIG. 14 is captured. In the processing target image I(i+1) of FIG. 14, a deep tube region is detected, the motion vectors 123 point in substantially the same direction, and the motion pattern is classified as “parallel movement”. In this case, as shown in FIG. 15, it can be estimated that the capsule endoscope 3 at the time of imaging is moving parallel to the subject in a posture facing toward the deep tube region. Therefore, since the capsule endoscope 3 is considered not to have moved toward the deep tube portion, the tube movement amount is set to “0”.
Next, the tube movement amount calculation unit 754 determines whether the motion pattern is “movement toward the subject” or “movement away from the subject” (step h11). When it is neither, that is, when a deep tube region is detected and the motion pattern is “unset motion” (step h11: No), the tube movement amount calculation unit 754 sets the tube movement amount to a predetermined value set in advance (step h13). For example, an empirically obtained average movement amount of the capsule endoscope 3 during one imaging interval is set as the predetermined value. The tube movement amount calculation unit 754 then returns to step a9 in FIG. 3.
FIG. 16 is a diagram illustrating an example of the processing target image I(i+1), and FIG. 17 is a diagram illustrating how the capsule endoscope 3 moves with respect to the subject when the processing target image I(i+1) of FIG. 16 is captured. In the processing target image I(i+1) of FIG. 16, a deep tube region 131 is detected, each motion vector 133 has a substantially positive inner product with the corresponding start point vector (the direction from the selected depth center point toward the motion vector), and the motion pattern is classified as “movement toward the subject”. In this case, as shown in FIG. 17, it can be estimated that the capsule endoscope 3 at the time of imaging is moving toward the deep tube portion in a posture facing the deep tube portion. Conversely, when the motion pattern is classified as “movement away from the subject”, it can be estimated that the capsule endoscope 3 at the time of imaging is moving in the direction opposite to the deep tube direction in a posture facing toward the deep tube region.
In these cases, the tube movement amount calculation unit 754 calculates the tube movement amount using the setting area in the time-series preceding image I(i) and the corresponding area in the processing target image I(i+1), for which a correspondence relationship has been set as the same target region (step h15).
FIG. 18 is a model diagram of the tube interior and the capsule endoscope 3 for explaining the tube movement amount calculation process of step h15. The upper part shows an imaging situation model of the capsule endoscope 3 capturing, at time t(i), the time-series preceding image I(i) that includes the setting area reflecting the target region 141 in the tube, and the lower part shows an imaging situation model of the capsule endoscope 3 capturing, at time t(i+1), the processing target image I(i+1) that includes the corresponding area of that setting area. Here, D represents the target region distance obtained by projecting the distance from the capsule endoscope 3 to the target region 141 on the tube inner wall at time t(i) onto the tube inner wall surface, and D′ represents the corresponding target region distance at time t(i+1). O is the optical center corresponding to the principal point of the optical system, such as the lens, of the capsule endoscope 3. R is the tube radius; for example, an average tube radius is used as the tube radius R.
The image coordinates 142a shown in the upper model diagram of FIG. 18 are the image coordinates of the time-series preceding image I(i) obtained by projection onto the imaging element of the capsule endoscope 3 in this imaging situation model. The image coordinates 142a form a coordinate system whose origin is at the position intersecting the optical axis 143 of the capsule endoscope 3, and f is the distance from the optical center O of the capsule endoscope 3 to the imaging element. The coordinates of the center of the setting area reflecting the target region 141 in the time-series preceding image I(i) obtained by this imaging situation model are denoted the structure area center coordinates T(xT, yT), and the coordinates of the center of gravity of the deep tube portion in I(i) are denoted the deep tube portion center-of-gravity coordinates C(xC, yC). The angle formed, at time t(i), by the vector OC from the optical center O toward the deep tube direction (the center of gravity of the deep tube portion) 144 and the vector OT from the optical center O to the target region 141 is also defined in the model.
Similarly, the image coordinates 142b shown in the lower model diagram of FIG. 18 are the image coordinates of the processing target image I(i+1) obtained by this imaging situation model. The image coordinates 142b form a coordinate system whose origin is at the position intersecting the optical axis 143 of the capsule endoscope 3, and f is the distance from the optical center O of the capsule endoscope 3 to the imaging element. The coordinates of the center of the corresponding area reflecting the target region 141 in the processing target image I(i+1) are denoted the corresponding area center coordinates T′(xT′, yT′), and the coordinates of the center of gravity of the deep tube portion in I(i+1) are denoted the deep tube portion center-of-gravity coordinates C′(xC′, yC′). The angle formed, at time t(i+1), by the vector OC′ from the optical center O toward the deep tube direction 144 and the vector OT′ from the optical center O to the target region 141 is defined in the same way. The imaging element pitch is the pixel pitch of the imaging element of the capsule endoscope 3. The values of camera parameters such as the distance f and the imaging element pitch are acquired in advance.
Using the corresponding area center coordinates T′, the deep tube portion center-of-gravity coordinates C′, the distance f, and the tube radius R of the imaging situation model in the lower part of FIG. 18, the target region distance D′ is obtained. D − D′ shown in equation (8) is the difference between the target region distances obtained by projecting, onto the tube inner wall surface, the distances from the capsule endoscope 3 to the target region 141 on the tube inner wall at times t(i) and t(i+1), and corresponds to the tube movement amount (movement amount along the tube lumen) d of the capsule endoscope 3 from time t(i) to t(i+1) shown in the lower part of FIG. 18. By obtaining D − D′ in this way, the tube movement amount of the capsule endoscope 3 from time t(i) to t(i+1) can be calculated.
Specifically, the tube movement amount calculation unit 754 calculates the tube movement amount for each combination of a setting area and a corresponding area for which a motion vector matching the motion pattern (“movement toward the subject” or “movement away from the subject”) was obtained by pattern matching between the time-series preceding image I(i) and the processing target image I(i+1). The tube movement amount calculation unit 754 then calculates the average of the resulting tube movement amounts as the tube movement amount of the capsule endoscope 3 from time t(i) to t(i+1). Thereafter, the tube movement amount calculation unit 754 returns to step a9 in FIG. 3.
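Under the imaging situation model of FIG. 18, the per-region quantity D − D′ can be sketched as follows. This is a simplified reconstruction, not the patent's equations (5) to (8): it assumes a pinhole camera on the tube axis, with the deep tube direction given by the ray through the deep tube portion center of gravity, so that a wall point at tube radius R seen at an angle phi from that direction lies at axial distance R / tan(phi).

```python
import math

def ray(px, py, f, pitch):
    # 3-D ray from the optical center O through image point (px, py),
    # converting pixel coordinates to physical units with the element pitch.
    return (px * pitch, py * pitch, f)

def angle_between(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def target_region_distance(T, C, f, pitch, R):
    # Angle between the deep tube direction (ray through C) and the ray
    # through the target region T; a wall point at tube radius R seen under
    # that angle lies at axial distance R / tan(angle).
    phi = angle_between(ray(*T, f, pitch), ray(*C, f, pitch))
    return R / math.tan(phi)

def tube_movement(T, C, T2, C2, f, pitch, R):
    # D - D': advance along the tube between times t(i) and t(i+1).
    return (target_region_distance(T, C, f, pitch, R)
            - target_region_distance(T2, C2, f, pitch, R))
```

For example, with f, pitch, and R all set to 1, a target region drifting from image coordinate (1, 0) to (2, 0) while the deep tube portion stays at the image center corresponds to an advance of 0.5 along the tube under this model.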
In step h17 in FIG. 13, the tube movement amount calculation unit 754 determines whether or not the motion pattern is “parallel movement”. If it is not “parallel movement” (step h17: No), the tube movement amount calculation unit 754 determines whether the motion pattern is “movement toward the subject” or “movement away from the subject” (step h19). If so (step h19: Yes), the tube movement amount calculation unit 754 sets the tube movement amount to “0” (step h21) and then returns to step a9 in FIG. 3. FIG. 19 is a diagram illustrating an example of the processing target image I(i+1), and FIG. 20 is a diagram illustrating how the capsule endoscope 3 moves with respect to the subject when the processing target image I(i+1) of FIG. 19 is captured. In the processing target image I(i+1) of FIG. 19, no deep tube region is detected.
Each motion vector 151 has a substantially positive inner product with the corresponding start point vector (the direction from the selected depth center point 153 toward the motion vector), and the motion pattern is classified as “movement toward the subject”. In this case, as shown in FIG. 20, it can be estimated that the capsule endoscope 3 at the time of imaging is moving closer to the tube inner wall in a posture oriented substantially perpendicular to the deep tube direction. Conversely, when the motion pattern is classified as “movement away from the subject”, it can be estimated that the capsule endoscope 3 at the time of imaging is moving away from the tube inner wall in a posture oriented substantially perpendicular to the deep tube direction. Therefore, since the capsule endoscope 3 is considered not to have moved toward the deep tube portion, the tube movement amount is set to “0”.
When the motion pattern is neither “movement toward the subject” nor “movement away from the subject” (step h19: No), the tube movement amount calculation unit 754 sets the tube movement amount to a predetermined value (step h23). The tube movement amount calculation unit 754 then returns to step a9 in FIG. 3.
On the other hand, when the motion pattern is “parallel movement” (step h17: Yes), the direction of movement along the tube is ambiguous. FIG. 21 is a diagram illustrating an example of the processing target image I(i+1), and FIG. 22 is a diagram illustrating how the capsule endoscope 3 moves with respect to the subject when the processing target image I(i+1) of FIG. 21 is captured. From the processing target image I(i+1) of FIG. 21, it can be estimated that the capsule endoscope 3 at the time of imaging is moving, in a posture oriented substantially perpendicular to the deep tube direction, either along the deep tube direction as indicated by the solid line in FIG. 22 or in the direction opposite to the deep tube direction as indicated by the alternate long and short dash line. The direction of the deep tube portion with respect to the subject is therefore estimated.
FIG. 23 is an explanatory diagram illustrating the process of estimating the direction of the deep tube portion.
In this estimation, the tube depth direction estimation unit 755 searches, going back in time series from the time-series preceding image I(i), for an in-tube image in which a deep tube region was detected. In the example of FIG. 23, a deep tube region 171 was detected in the in-tube image I(i−1) captured at time t(i−1), two frames before the processing target image I(i+1) in the time series, and this in-tube image I(i−1) is found by the search. In this case, it can be estimated from the motion vectors that the capsule endoscope 3 captured the in-tube image I(i−1) in a posture facing the deep tube portion as shown in the lower part of FIG. 23, then changed its posture to a direction substantially orthogonal to the deep tube direction and captured the time-series preceding image I(i) while translating along the tube in the direction opposite to the deep tube direction.
Next, the tube depth direction estimation unit 755 sequentially adds the averages of the motion vectors from the searched in-tube image up to the processing target image I(i+1), and estimates the deep tube direction with respect to the subject shown in the processing target image I(i+1) from the result. For example, in the example of FIG. 23, first, in the in-tube image I(i−1), a vector 173 from the center coordinates of the image to the center coordinates of the deep tube region 171 is calculated and set as the deep tube direction. Next, the average of the motion vectors 175 of the time-series preceding image I(i) is calculated, and the direction of the vector obtained by adding this average to the deep tube direction is estimated as the deep tube direction with respect to the subject shown in the time-series preceding image I(i) (the estimated deep tube direction). Subsequently, the average of the motion vectors 177 of the processing target image I(i+1) is calculated and added to the estimated deep tube direction, and the resulting direction is estimated as the deep tube direction with respect to the subject shown in the processing target image I(i+1).
The estimated deep tube direction gives the rough direction of the deep tube portion with respect to the subject at the time of imaging by the capsule endoscope 3. In the example of FIG. 23, the position of the deep tube portion is on the right side of the image, all the motion vectors 175 and 177 point to the right in FIG. 23, and the estimated deep tube direction is also obtained as a vector pointing to the right. It is therefore presumed that the deep tube portion lies to the right of the processing target image I(i+1), that is, to the right of the subject shown in the processing target image I(i+1).
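The accumulation described above can be sketched as a simple vector sum; the data layout (one list of motion vectors per image) is an assumption for illustration.

```python
def estimate_tube_depth_direction(initial_dir, per_image_motion_vectors):
    """initial_dir: vector from the image center to the deep tube region in
    the most recent image where that region was detected.
    per_image_motion_vectors: for each subsequent image, in time-series
    order, its list of motion vectors (vx, vy)."""
    dx, dy = initial_dir
    for vectors in per_image_motion_vectors:
        # Add the average motion vector of each image in turn.
        dx += sum(vx for vx, vy in vectors) / len(vectors)
        dy += sum(vy for vx, vy in vectors) / len(vectors)
    return dx, dy
```

Starting from a rightward vector (10, 0) and two images whose motion vectors average (2, 0) and (3, 0), the estimated deep tube direction remains rightward at (15, 0).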
Thereafter, the tube movement amount calculation unit 754 calculates the tube movement amount using the setting area in the time-series preceding image I(i) and the corresponding area in the processing target image I(i+1), for which the correspondence has been set as the same target region (step h27).
  • FIG. 24 is a model diagram of the inside of the tube and the capsule endoscope 3 for explaining the calculation processing of the tube movement amount in step h27.
  • the imaging state model of the capsule endoscope 3 that has captured the time-series pre-image I (i) including the setting area that reflects the target area 181 in the tube at time t (i) is shown in the upper part.
  • An imaging state model of the capsule endoscope 3 that captures the processing target image I (i + 1) including the corresponding area corresponding to the setting area at time t (i + 1) is shown in the lower part.
  • D represents the distance from the intersection of the optical axis of the capsule endoscope 3 with the tube inner wall to the target region 181 at time t(i), and D′ represents the distance from that intersection to the target region 181 at time t(i+1).
  • O is the optical center, corresponding to the principal point of the optical system, such as the lens, of the capsule endoscope 3.
  • h is the distance from the capsule endoscope 3 to the inner wall of the tube. As the distance h, for example, the distance from the optical center O to the outer wall of the capsule endoscope 3 is used, on the assumption that the outer wall of the capsule endoscope 3 and the inner wall of the tube are almost in close contact.
  • Alternatively, since the pixel value v is proportional to the irradiation light amount u of the imaging device and inversely proportional to the square of the distance h, the distance h may be calculated using this relational expression (9). The pixel value v may be the pixel value of a specific pixel in the image, or the average of the pixel values of all pixels; the proportionality coefficient in expression (9) is a predetermined constant.
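  • Since this excerpt does not reproduce relational expression (9) itself, the sketch below assumes the form the text describes, v = k·u / h² (so h = √(k·u / v)); the coefficient name k and the function name are assumptions for illustration.

```python
import math

def distance_from_pixel_value(v, u, k):
    """Estimate the distance h to the tube inner wall from a pixel
    value v, assuming v is proportional to the irradiation light
    amount u and inversely proportional to h squared: v = k*u/h**2."""
    return math.sqrt(k * u / v)

# With k = 1.0, a quarter of the observed intensity implies twice
# the distance to the wall.
h1 = distance_from_pixel_value(v=100.0, u=100.0, k=1.0)
h2 = distance_from_pixel_value(v=25.0, u=100.0, k=1.0)
```
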
  • The image coordinates 182a shown in the upper model diagram of FIG. 24 are the image coordinates of the pre-time-series image I(i) obtained by projection onto the imaging element of the capsule endoscope 3 under this imaging situation model. The image coordinates 182a form a coordinate system whose origin lies at the intersection with the optical axis 183 of the capsule endoscope 3, and f is the distance from the optical center O of the capsule endoscope 3 to the image sensor. Let the center coordinates of the setting area showing the target area 181 in the pre-time-series image I(i) be the setting area center coordinates A(xA, yA), and the origin of the image coordinates 182a be the origin coordinates S(xS, yS). θ is the angle formed by the vector OS from the optical center O in the direction of the optical axis 183 and the vector OA from the optical center O to the target region 181 at time t(i).
  • Similarly, the image coordinates 182b shown in the lower model diagram of FIG. 24 are the image coordinates of the processing target image I(i+1) obtained under this imaging state model. The image coordinates 182b form a coordinate system whose origin lies at the intersection with the optical axis 183 of the capsule endoscope 3, and f is the distance from the optical center O of the capsule endoscope 3 to the image sensor. Let the center coordinates of the corresponding area showing the target area 181 in the processing target image I(i+1) be the corresponding area center coordinates A′(xA′, yA′), and the origin of the image coordinates 182b be the origin coordinates S′(xS′, yS′). θ′ is the angle formed by the vector OS′ from the optical center O in the direction of the optical axis 183 and the vector OA′ from the optical center O to the target area 181 at time t(i+1).
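  • Under a standard pinhole camera model, the angles θ and θ′ can be computed from the area-center coordinates and the optical-center-to-sensor distance f. The patent's exact formula is not reproduced in this excerpt; the sketch below assumes the usual pinhole relation tan θ = |SA| / f.

```python
import math

def view_angle(center_xy, f):
    """Angle between the optical axis (vector OS) and the ray toward
    an area center (vector OA) in a pinhole model: the area center
    lies |SA| = hypot(x, y) from the image origin, at sensor
    distance f, so theta = atan(|SA| / f)."""
    x, y = center_xy
    return math.atan2(math.hypot(x, y), f)

theta = view_angle((3.0, 4.0), f=5.0)       # |SA| = 5 = f, so theta = pi/4
theta_dash = view_angle((0.0, 0.0), f=5.0)  # on-axis point: theta' = 0
```
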
  • By matching between the pre-time-series image I(i) and the processing target image I(i+1), the tube movement amount calculation unit 754 calculates a tube movement amount for each combination of a setting area and a corresponding area whose motion vector matches the determined motion pattern (for example, "translation" in the example of FIG. 23) and whose motion vector direction is within a predetermined threshold range. Then, based on the plurality of tube movement amounts obtained, the tube movement amount calculation unit 754 calculates the tube movement amount d of the capsule endoscope 3 from time t(i) to t(i+1) according to equation (14).
  • In this case, the tube movement amount from time t(i) to t(i+1) is calculated as the obtained value with a negative sign attached.
  • The tube movement amount calculation unit 754 then returns to step a9 in FIG.
  • Next, the tube relative position calculation unit 756 first accumulates, for each time, the tube movement amounts of the capsule endoscope 3 from time t(i) to t(i+1) calculated by the tube movement amount calculation process. This accumulated value corresponds to the distance the capsule endoscope 3 has moved from time t(0), when it was at the tube entrance, to the imaging time t(i+1).
  • The tube relative position calculation unit 756 then divides the accumulated tube movement amount by the sum of the tube movement amounts from time t(0) at the tube entrance to time t(T) at the exit, that is, by the total tube length, to calculate the relative position of the capsule endoscope 3 in the tube at each time. In this way, taking the total tube length as 1.0, the relative position along the tube from the entrance at the time each image was captured is obtained.
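  • The accumulation and normalization steps above reduce to a cumulative sum divided by the total tube movement; a minimal sketch (the function name is hypothetical):

```python
def relative_positions(tube_movements):
    """Accumulate the per-interval tube movement amounts and
    normalize by their total (the total tube length), so the
    relative position runs from 0.0 at the tube entrance to 1.0
    at the tube exit."""
    total = sum(tube_movements)
    positions, acc = [], 0.0
    for d in tube_movements:
        acc += d
        positions.append(acc / total)
    return positions

# Movement amounts for t(0)..t(4); a backward movement carries a
# negative sign, as described in the text.
pos = relative_positions([2.0, 3.0, -1.0, 6.0])
```
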
  • FIG. 25 is a schematic diagram showing the result of the tube relative position calculation process: the horizontal axis is time t, the vertical axis is the relative position of the capsule endoscope 3 between the tube entrance and the tube exit, and the curve shows the time-series change of that relative position. Using the imaging time of an intraluminal image showing an affected part as a reference, the relative position of the capsule endoscope 3 from the tube entrance or tube exit at the moment that image was captured can be read off, so the position of the affected part can be estimated.
  • As described above, by determining the posture of the capsule endoscope 3 and how it moves, the tube movement amount of the capsule endoscope 3 toward the deep portion of the tube can be calculated easily. Compared with the conventional approach of estimating the movement amount from the reception intensity of an electromagnetic wave signal, this realizes a movement amount estimation that is less susceptible to the external environment, so the estimation accuracy of the movement amount can be improved.
  • In the description above, the relative position was obtained with respect to the tube entrance and exit, but the position of the capsule endoscope 3 may instead be estimated with reference to the start or end position of a specific organ such as the small intestine or the large intestine. This makes it possible to know how far along the lumen the affected part is located from the start or end position of that organ.
  • As described above, the image processing device and the image processing program of the present invention are useful for calculating the movement amount of an imaging device that moves through a body lumen toward its deep portion.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Endoscopes (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

In an image processing device, an image pattern determination section (751) determines the position of a capsule endoscope (imaging device) and the relative movement path of the capsule endoscope (imaging device) with respect to a subject when intraluminal images are captured. A tube movement distance calculation section (754) calculates the distance traveled in the lumen-deep direction by the capsule endoscope from the time one intraluminal image is captured until the other intraluminal image is captured, on the basis of the correspondence relation between the type and movement path of the subject determined by the image pattern determination section (751) and the same target area shown between chronologically adjacent intraluminal images.
PCT/JP2008/070783 2007-12-21 2008-11-14 Image processing device and image processing program Ceased WO2009081669A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007-330235 2007-12-21
JP2007330235A JP2009148468A (ja) 2007-12-21 2007-12-21 画像処理装置および画像処理プログラム

Publications (1)

Publication Number Publication Date
WO2009081669A1 true WO2009081669A1 (fr) 2009-07-02

Family

ID=40800984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/070783 Ceased WO2009081669A1 (fr) 2007-12-21 2008-11-14 Dispositif de traitement d'image et programme de traitement d'image

Country Status (2)

Country Link
JP (1) JP2009148468A (fr)
WO (1) WO2009081669A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5622605B2 (ja) * 2011-02-09 2014-11-12 Olympus Medical Systems Corp. Receiving unit
WO2013015104A1 (fr) * 2011-07-22 2013-01-31 Olympus Medical Systems Corp. Capsule endoscope system, image display method, and image display program
JP2016213657A (ja) * 2015-05-08 2016-12-15 Japan Broadcasting Corporation (NHK) Coding block size determination device, coding device, and program
JP6745748B2 (ja) * 2017-03-16 2020-08-26 FUJIFILM Corporation Endoscope position specifying device, operating method thereof, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005334331A (ja) * 2004-05-27 2005-12-08 Olympus Corp Capsule medication system
JP2006212051A (ja) * 2005-02-01 2006-08-17 Yamaha Corp Tablet-type imaging device, in-vivo imaging system, and in-vivo imaging method
JP2006334297A (ja) * 2005-06-06 2006-12-14 Olympus Medical Systems Corp Image display device


Also Published As

Publication number Publication date
JP2009148468A (ja) 2009-07-09

Similar Documents

Publication Publication Date Title
US10646288B2 (en) Automated steering systems and methods for a robotic endoscope
US10860930B2 (en) Learning method, image recognition device, and computer-readable storage medium
US8107686B2 (en) Image procesing apparatus and image processing method
EP2906133B1 (fr) Détermination de la position d'un dispositif médical dans une structure anatomique ramifiée
JP6405138B2 (ja) 画像処理装置、画像処理方法、及び画像処理プログラム
US20220254017A1 (en) Systems and methods for video-based positioning and navigation in gastroenterological procedures
US10178941B2 (en) Image processing apparatus, image processing method, and computer-readable recording device
CN112435740B (zh) 信息处理设备、检查系统、信息处理方法和存储介质
US20130002842A1 (en) Systems and Methods for Motion and Distance Measurement in Gastrointestinal Endoscopy
KR20070094746A (ko) 의료 화상 처리 장치 및 의료 화상 처리 방법
JP5085370B2 (ja) 画像処理装置および画像処理プログラム
CN111035351B (zh) 用于胃肠道中的胶囊相机的行进距离测量的方法及装置
CN115209782A (zh) 内窥镜系统和基于内窥镜系统的管腔扫描方法
CN112508840A (zh) 信息处理设备、检查系统、信息处理方法和存储介质
JP2009261798A (ja) 画像処理装置、画像処理プログラムおよび画像処理方法
WO2009081669A1 (fr) Dispositif de traitement d'image et programme de traitement d'image
WO2004045397A1 (fr) Dispositif de detection de la direction d'insertion d'un endoscope, systeme de detection de la direction d'insertion de l'endoscope, et procede de detection de la direction d'insertion de l'endoscope
Elvira et al. CudaSIFT-SLAM: multiple-map visual SLAM for full procedure mapping in real human endoscopy
CN113409386A (zh) 用于基于图像的定位的方法、系统和介质
Allain et al. Re-localisation of a biopsy site in endoscopic images and characterisation of its uncertainty
CN111936030B (zh) 界标估计方法、内窥镜装置及存储介质
Dei et al. Adjunct tools for colonoscopy enhancement: a comprehensive review
EP4191527A1 (fr) Dispositif de traitement d'images d'endoscope
Armin Automated visibility map from colonoscopy video to support clinical diagnosis and improve the quality of colonoscopy
JP2025072849A (ja) リアルタイムの処置中の内視鏡シャフト動き追跡のためのシステムおよび方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08864028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08864028

Country of ref document: EP

Kind code of ref document: A1