WO2022049807A1 - Endoscope system and method of operating the same - Google Patents
Endoscope system and method of operating the same
- Publication number
- WO2022049807A1 (PCT/JP2021/008993)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- measurement
- light
- scale
- endoscope
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1079—Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00006—Operational features of endoscopes characterised by electronic signal processing of control signals
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00043—Operational features of endoscopes provided with output arrangements
- A61B1/00045—Display arrangement
- A61B1/0005—Display arrangement combining images e.g. side-by-side, superimposed or tiled
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00059—Operational features of endoscopes provided with identification means for the endoscope
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0605—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements for spatially modulated illumination
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0655—Control therefor
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/31—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1076—Measuring physical dimensions, e.g. size of the entire body or parts thereof for measuring dimensions inside body cavities, e.g. using catheters
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/24—Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
- G02B23/2407—Optical details
- G02B23/2423—Optical details of the distal end
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00064—Constructional details of the endoscope body
- A61B1/00071—Insertion part of the endoscope body
- A61B1/0008—Insertion part of the endoscope body characterised by distal tip features
- A61B1/00082—Balloons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00194—Optical arrangements adapted for three-dimensional imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0661—Endoscope light sources
- A61B1/0676—Endoscope light sources at distal tip of an endoscope
Definitions
- the present invention relates to an endoscope system that displays a virtual scale for measuring the size of a subject and a method of operating the same.
- the distance to the subject or the size of the subject is acquired.
- the subject is irradiated with the illumination light and the measurement light, and the measurement light irradiation region such as the spot light is made to appear on the subject by the irradiation of the measurement light.
- a virtual scale for measuring the size of the subject is displayed on the image in correspondence with the position of the spot light.
- An object of the present invention is to provide an endoscope system, and a method of operating the same, capable of determining whether or not the length measurement mode can be executed when an endoscope is connected.
- the endoscope system of the present invention includes an endoscope and a processor device having an image control processor; when the endoscope is connected to the processor device, the image control processor determines whether or not the endoscope is a length-measurement-compatible endoscope, and if it is, enables switching to the length measurement mode.
- when the endoscope is a length-measurement-compatible endoscope, the endoscope can irradiate the measurement light, and a length measurement image in which a virtual scale based on the measurement light is displayed can be shown on the display.
- with the switching to the length measurement mode enabled, the image control processor can, by switching to the length measurement mode, switch the measurement light ON or OFF and configure the length measurement image display setting for the length measurement image.
- it is preferable that the image control processor performs at least one of: switching ON or OFF the operation status display of the length measurement function, which indicates that the virtual scale is being displayed on the display; switching ON or OFF the display of the virtual scale; and changing its display mode.
- it is preferable that, by switching to the length measurement mode, the image control processor switches the measurement light to ON, the length measurement image display setting to ON, the length measurement function operation status display to ON, and the virtual scale display to ON.
- the image control processor preferably saves the image display setting before switching to the length measurement mode.
- the display mode of the virtual scale is changed by selecting from a plurality of scale patterns.
- it is preferable that, by switching from the length measurement mode to another mode, the image control processor switches the measurement light to OFF, the length measurement image display setting to OFF, the length measurement function operation status display to OFF, and the virtual scale display to OFF.
- the image control processor preferably restores the image display setting saved before switching to the length measurement mode.
- the present invention also relates to a method of operating an endoscope system including an endoscope and a processor device having an image control processor, in which, when the endoscope is connected to the processor device, the image control processor determines whether or not the endoscope is a length-measurement-compatible endoscope, and if it is, enables switching to the length measurement mode.
- according to the present invention, it is possible to determine whether or not the length measurement mode can be executed when the endoscope is connected.
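The connection-time check and the save/restore of display settings described above might be sketched as follows; this is a minimal illustration in Python, and every class, field, and method name here is hypothetical rather than taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class DisplaySettings:
    """Display-related settings managed by the image control processor."""
    measurement_light_on: bool = False
    length_image_display_on: bool = False
    status_display_on: bool = False
    virtual_scale_on: bool = False


class ImageControlProcessor:
    """Sketch of the connection check and mode-switching behavior."""

    def __init__(self):
        self.length_mode_enabled = False
        self.settings = DisplaySettings()
        self._saved_settings = None

    def on_endoscope_connected(self, scope_supports_measurement: bool):
        # Enable switching to the length measurement mode only when the
        # connected endoscope is a length-measurement-compatible endoscope.
        self.length_mode_enabled = scope_supports_measurement

    def enter_length_mode(self):
        if not self.length_mode_enabled:
            raise RuntimeError("connected endoscope does not support length measurement")
        # Save the current image display settings so they can be restored later.
        self._saved_settings = DisplaySettings(**vars(self.settings))
        # Switching to the length measurement mode turns everything ON.
        self.settings = DisplaySettings(True, True, True, True)

    def exit_length_mode(self):
        # Switching to another mode restores the settings saved beforehand
        # (everything OFF if nothing was saved).
        self.settings = self._saved_settings or DisplaySettings()
        self._saved_settings = None
```

The point of the sketch is only the ordering: capability check at connection time, then save settings on mode entry and restore them on exit.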
- the endoscope system 10 includes an endoscope 12, a light source device 13, a processor device 14, a display 15, a user interface 16, an extended processor device 17, and an extended display 18.
- the endoscope 12 is optically connected to the light source device 13 and electrically connected to the processor device 14.
- the endoscope 12 has an insertion portion 12a to be inserted into the body of the observation target, an operation portion 12b provided at the base end of the insertion portion 12a, and a bending portion 12c and a tip portion 12d provided on the distal end side of the insertion portion 12a.
- the bending portion 12c bends in response to operation of the operation portion 12b.
- the tip portion 12d is directed in a desired direction by the bending motion of the bending portion 12c.
- the operation portion 12b is provided with an observation mode switching switch 12f used for switching the observation mode, a still image acquisition instruction switch 12g used for instructing acquisition of a still image of the observation target, and a zoom operation unit 12h used for operating the zoom lens 21b.
- the processor device 14 is electrically connected to the display 15 and the user interface 16.
- the display 15 outputs and displays an image or information of an observation target processed by the processor device 14.
- the user interface 16 has a keyboard, a mouse, a touch pad, a microphone, and the like, and has a function of accepting input operations such as function settings.
- the expansion processor device 17 is electrically connected to the processor device 14.
- the expansion display 18 outputs and displays an image, information, or the like processed by the expansion processor device 17.
- the endoscope 12 has a normal observation mode, a special observation mode, and a length measurement mode, which can be switched by the observation mode switching switch 12f.
- the normal observation mode is a mode in which the observation target is illuminated by the illumination light.
- the special observation mode is a mode in which the observation target is illuminated with special light different from the illumination light.
- in the length measurement mode, the observation target is illuminated with the illumination light or the measurement light, and a virtual scale used for measuring the size of the observation target or the like is displayed on the subject image obtained by imaging the observation target.
- the subject image on which the virtual scale is not superimposed is displayed on the display 15, while the subject image on which the virtual scale is superimposed is displayed on the extended display 18.
- when the still image acquisition instruction switch 12g is operated, the screen of the display 15 freezes, and an alert sound (for example, a "beep") is emitted to indicate that the still image has been acquired.
- the still image of the subject image obtained before and after the operation timing of the still image acquisition instruction switch 12g is stored in the still image storage unit 42 (see FIG. 8) in the processor device 14.
- the still image storage unit 42 is a storage unit such as a hard disk or a USB (Universal Serial Bus) memory.
- when the processor device 14 can be connected to a network, the still image of the subject image may be stored in a still image storage server (not shown) connected to the network, in place of or in addition to the still image storage unit 42.
- the still image acquisition instruction may be given by using an operation device other than the still image acquisition instruction switch 12g.
- for example, a foot pedal (not shown) may be connected to the processor device 14, and a still image acquisition instruction may be given when the user operates the foot pedal. The foot pedal may also be used for mode switching.
- a gesture recognition unit (not shown) that recognizes the user's gestures may be connected to the processor device 14, and a still image acquisition instruction may be given when the gesture recognition unit recognizes a specific gesture performed by the user.
- the mode switching may also be performed using the gesture recognition unit.
- a line-of-sight input unit (not shown) provided near the display 15 may be connected to the processor device 14, and a still image acquisition instruction may be given when the line-of-sight input unit recognizes that the user's line of sight has remained within a predetermined area of the display 15 for a certain period of time or longer.
- a voice recognition unit (not shown) may be connected to the processor device 14, and when the voice recognition unit recognizes a specific voice emitted by the user, a still image acquisition instruction may be given. The mode switching may also be performed using the voice recognition unit.
- an operation panel such as a touch panel may be connected to the processor device 14, and a still image acquisition instruction may be given when the user performs a specific operation on the operation panel. The mode switching may also be performed using the operation panel.
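The several optional input devices above (switch, foot pedal, gesture, line of sight, voice, operation panel) all map recognized events onto the same two actions. A hedged sketch of such a dispatcher, with entirely illustrative device and event names:

```python
from typing import Optional

# Hypothetical mapping of (device, recognized event) pairs to actions.
ACTIONS = {
    ("switch_12g", "press"): "acquire_still",
    ("foot_pedal", "press"): "acquire_still",
    ("gesture", "specific_gesture"): "acquire_still",
    ("gaze", "dwell_in_area"): "acquire_still",
    ("voice", "specific_phrase"): "acquire_still",
    ("touch_panel", "specific_operation"): "acquire_still",
    ("foot_pedal", "long_press"): "switch_mode",
}


def dispatch(device: str, event: str) -> Optional[str]:
    """Return the action for a recognized (device, event) pair, else None."""
    return ACTIONS.get((device, event))
```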
- the tip portion 12d is provided with an imaging optical system 21 that receives light from the subject, an illumination optical system 22 for irradiating the subject with illumination light, a measurement light emitting unit 23 for irradiating the subject with the measurement light used in the length measurement mode, an opening 24 for projecting a treatment tool toward the subject, and an air supply / water supply nozzle 25 for supplying air and water.
- a balloon 19 as a fixing member is detachably attached to the insertion portion 12a.
- the balloon 19 is a disposable balloon, which is discarded after one use or a small number of uses and replaced with a new one.
- the number of uses referred to here means the number of cases (procedures), and a small number of uses means 10 or fewer.
- the balloon 19 is formed of an elastic material such as rubber in a substantially tubular shape whose ends are narrowed.
- the balloon 19 has a small-diameter tip portion 19a and base end portion 19b, and a bulging portion 19c at the center.
- the balloon 19 is fixed to the insertion portion 12a by, for example, inserting the insertion portion 12a inside the balloon 19, arranging the balloon 19 at a predetermined position, and then fitting rubber rings 20a and 20b onto the tip portion 19a and the base end portion 19b.
- the predetermined position where the balloon 19 is fixed to the insertion portion 12a is on the base end side of the insertion portion 12a relative to the bending portion 12c, and it is preferable that the tip of the balloon 19 at the tip portion 19a coincides in position with the base end of the bending portion 12c. As a result, the balloon 19 does not interfere with the bending operation of the bending portion 12c, and the bending portion 12c does not interfere with the expansion or contraction of the balloon 19.
- the balloon 19 is expanded or contracted by the balloon control device BLC, as will be described later.
- the balloon control device BLC is preferably operated by the user interface 16.
- when the balloon 19 is in the contracted state, the insertion portion 12a is not fixed within the intestinal tract 26.
- if the measurement light is irradiated and an image is captured in this state, the position of the tip portion 12d may move vertically and horizontally, and the measurement light may not be accurately applied to the observation target that the user wants to measure.
- therefore, the balloon 19 is inflated under the control of the balloon control device BLC. Since the outer diameter of the inflated balloon 19 is formed to match the inner diameter of the intestinal tract 26, the insertion portion 12a is fixed within the intestinal tract 26. As a result, the measurement light can be accurately applied to the observation target that the user wants to measure.
- the "fixed state” here includes a state in which the position of the insertion portion 12a with respect to the insertion direction is fixed, but the orientation of the tip portion 12d can be finely adjusted.
- the tip portion 12d of the endoscope has a substantially circular shape, and the imaging optical system 21, the illumination optical systems 22, the opening 24, and the air supply / water supply nozzle 25 are provided along the first direction D1.
- Two illumination optical systems 22 are provided on both sides of the image pickup optical system 21 with respect to the second direction orthogonal to the first direction.
- the measurement light emitting unit 23 is provided between the imaging optical system 21 and the air supply / water supply nozzle 25 in the first direction. Since the air supply port of the air supply / water supply nozzle 25 is directed toward the imaging optical system 21 and the measurement light emitting unit 23, both the imaging optical system 21 and the measurement light emitting unit 23 can therefore be washed by the supplied air or water.
- a tip cap 27 is attached to the tip portion 12d.
- the tip cap 27 is provided with a tip surface 28.
- the tip surface 28 has a flat surface 28a, a flat surface 28b, and a guide surface 28c.
- the plane 28a is a plane orthogonal to the axial direction Z.
- the plane 28b is parallel to the plane 28a and is located on the tip side of the plane 28a in the axial direction Z.
- the guide surface 28c is arranged between the plane 28a and the plane 28b.
- the flat surface 28b is provided with a through hole 27a for exposing the tip surface 21c of the imaging optical system 21 and a through hole 27b for exposing the tip surface 22b of the pair of illumination optical systems 22.
- the tip surface 21c, the tip surface 22b, and the flat surface 28b are arranged on the same surface.
- Through holes 27c and 27d are arranged on the plane 28a.
- the air supply / water supply nozzle 25 is exposed from the through hole 27c. That is, the plane 28a is the mounting position of the air supply / water supply nozzle 25 in the axial direction Z.
- An injection cylinder portion 25a is formed on the tip end side of the air supply / water supply nozzle 25.
- the injection cylinder portion 25a is formed in a tubular shape protruding from the base end portion of the air supply / water supply nozzle 25 in a direction of bending at, for example, 90 degrees, and has an injection port 25b at the tip thereof.
- the injection cylinder portion 25a is arranged so as to project from the through hole 27c toward the tip end side in the axial direction Z.
- the injection port 25b is arranged toward the image pickup optical system 21.
- the air supply / water supply nozzle 25 injects a cleaning liquid or a gas, which is a fluid, onto the tip surface 21c of the image pickup optical system 21 and its peripheral portion.
- the flow velocity F1 of the wash water at the position where it reaches the imaging optical system 21, that is, at the outer peripheral edge of the imaging optical system 21, is preferably 2 m/s or more.
- the gas flow velocity F2 at the outer peripheral edge of the imaging optical system 21 is preferably 40 m / s or more.
- the flow velocities F1 and F2 preferably satisfy the above values regardless of the orientation of the tip portion 12d. For example, when the air supply / water supply nozzle 25 is located vertically below the imaging optical system 21, the flow velocity decreases under the influence of gravity on the washing water or gas; even in this case, it is preferable that the above values are satisfied.
- on the flat surface 28a, the tip surface of the measurement light emitting unit 23, exposed from the through hole 27d, is also arranged. That is, the mounting position of the air supply / water supply nozzle 25 and the tip surface of the measurement light emitting unit 23 are arranged at the same position in the axial direction Z.
- the measurement light emitting unit 23 is arranged within the fluid injection range of the air supply / water supply nozzle 25 and between the image pickup optical system 21 and the air supply / water supply nozzle 25.
- when the tip surface 28 is viewed from the axial direction Z, the measurement light emitting unit 23 is arranged in a region connecting the injection port 25b of the air supply / water supply nozzle 25 and the outer peripheral edge of the imaging optical system 21.
- the fluid can be jetted to the measurement light emitting unit 23 at the same time.
- the guide surface 28c is formed by a continuous surface connecting the flat surface 28a and the flat surface 28b.
- the guide surface 28c is an inclined surface formed flat from a position in contact with the outer peripheral edge of the measurement light emitting unit 23 to a position in contact with the outer peripheral edge of the imaging optical system 21. Since the guide surface 28c is arranged in the fluid injection range of the air supply / water supply nozzle, when the fluid is injected from the air supply / water supply nozzle 25, the fluid is also injected to the guide surface 28c. The fluid jetted on the guide surface 28c diffuses and is sprayed on the image pickup optical system 21. In this case, the fluid injection range of the air supply / water supply nozzle may include the entire guide surface 28c or only a part of the guide surface 28c. In the present embodiment, the guide surface 28c is all included in the region connecting the injection port 25b of the air supply / water supply nozzle 25 and the outer peripheral edge of the image pickup optical system 21.
- the light source device 13 includes a light source unit 30 and a light source processor 31.
- the light source unit 30 generates illumination light or special light for illuminating the subject.
- the illumination light or special light emitted from the light source unit 30 is incident on the light guide LG and is applied to the subject through the illumination lens 22a.
- the light source unit 30 uses, as the light source of the illumination light, a white light source that emits white light, or a plurality of light sources including a white light source and a light source that emits light of another color (for example, a blue light source that emits blue light).
- the illumination light may be white mixed light that is a combination of at least one of purple light, blue light, green light, and red light. In this case, it is preferable to design the illumination optical system 22 so that the irradiation range of the green light is larger than the irradiation range of the red light.
- the light source processor 31 controls the light source unit 30 based on an instruction from the system control unit 41.
- the system control unit 41 gives an instruction regarding the light source control to the light source processor 31, and also controls the light source 23a (see FIG. 5) of the measurement light emission unit 23.
- the system control unit 41 controls to turn on the illumination light and turn off the measurement light in the normal observation mode.
- in the special observation mode, the special light is turned on and the measurement light is turned off.
- in the length measurement mode, the system control unit 41 performs control to turn the illumination light or the measurement light on or off.
- the illumination optical system 22 has an illumination lens 22a, and the light from the light guide LG is irradiated to the observation target through the illumination lens 22a.
- the image pickup optical system 21 includes an objective lens 21a, a zoom lens 21b, and an image pickup element 32.
- the reflected light from the observation target is incident on the image pickup device 32 via the objective lens 21a and the zoom lens 21b, whereby a reflected image of the observation target is formed on the image pickup device 32.
- the zoom lens 21b has an optical zoom function for enlarging or reducing the subject as a zoom function by moving between the telephoto end and the wide end.
- the optical zoom function can be switched ON and OFF by the zoom operation unit 12h (see FIG. 1) provided on the operation unit 12b of the endoscope; when the optical zoom function is ON, further operating the zoom operation unit 12h enlarges or reduces the subject at a specific magnification.
- the image sensor 32 is a color image sensor, which captures a reflected image of a subject and outputs an image signal.
- the image pickup device 32 is preferably a CCD (Charge Coupled Device) image pickup sensor, a CMOS (Complementary Metal-Oxide Semiconductor) image pickup sensor, or the like.
- the image pickup device 32 used in the present invention is a color image pickup sensor for obtaining a red image, a green image, and a blue image of the three colors R (red), G (green), and B (blue).
- the red image is an image output from a red pixel provided with a red color filter in the image sensor 32.
- the green image is an image output from a green pixel provided with a green color filter in the image sensor 32.
- the blue image is an image output from a blue pixel provided with a blue color filter in the image sensor 32.
- the image pickup device 32 is controlled by the image pickup control unit 33.
- the image signal output from the image sensor 32 is transmitted to the CDS / AGC circuit 34.
- the CDS / AGC circuit 34 performs correlated double sampling (CDS (Correlated Double Sampling)) and automatic gain control (AGC (Auto Gain Control)) on an image signal which is an analog signal.
- the image signal that has passed through the CDS / AGC circuit 34 is converted into a digital image signal by the A / D converter (A / D (Analog / Digital) converter) 35.
- the A / D converted digital image signal is input to the communication I / F (Interface) 37 of the light source device 13 via the communication I / F (Interface) 36.
- the system control unit 41, configured by the image control processor, operates a program stored in the program storage memory, whereby the functions of the receiving unit 38 connected to the communication I/F (Interface) 37 of the light source device 13, the signal processing unit 39, and the display control unit 40 are realized.
- the receiving unit 38 receives the image signal transmitted from the communication I / F 37 and transmits it to the signal processing unit 39.
- the signal processing unit 39 has a built-in memory for temporarily storing an image signal received from the receiving unit 38, and processes an image signal group which is a set of image signals stored in the memory to generate a subject image.
- the receiving unit 38 may directly send the control signal related to the light source processor 31 to the system control unit 41.
- in the normal observation mode, the signal processing unit 39 performs signal allocation processing in which the blue image of the subject image is assigned to the B channel of the display 15, the green image to the G channel, and the red image to the R channel, whereby a color subject image is displayed on the display 15. In the length measurement mode as well, the same signal allocation processing as in the normal observation mode is performed.
- when the special observation mode is set, the signal processing unit 39 does not use the red image of the subject image for display on the display 15; by assigning the blue image of the subject image to the B channel and the G channel of the display 15 and the green image to the R channel, a pseudo-colored subject image is displayed on the display 15. Further, when set to the length measurement mode, the signal processing unit 39 transmits a subject image including the irradiation position of the measurement light to the data transmission / reception unit 43. The data transmission / reception unit 43 transmits data related to the subject image to the expansion processor device 17, and can also receive data and the like from the expansion processor device 17. The received data can be processed by the signal processing unit 39 or the system control unit 41.
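The signal allocation described above (one-to-one in the normal and length measurement modes; pseudo-color reassignment in the special observation mode) can be summarized in a short sketch. The function and mode names are illustrative, not from the patent:

```python
def allocate_channels(red_img, green_img, blue_img, mode):
    """Map the sensor's R/G/B images onto the display's (R, G, B) channels.

    Normal observation and length measurement modes use a one-to-one
    mapping; in the special observation mode the red image is unused,
    the blue image drives both the B and G channels, and the green
    image drives the R channel, producing a pseudo-color image.
    """
    if mode in ("normal", "length_measurement"):
        return red_img, green_img, blue_img
    if mode == "special":
        # Blue image -> B and G channels; green image -> R channel.
        return green_img, blue_img, blue_img
    raise ValueError(f"unknown mode: {mode}")
```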
- FIG. 9A shows a subject image in a state where the digital zoom function is OFF, while FIG. 9B shows a subject image in a state where the digital zoom function is ON, in which the central portion of the subject image of FIG. 9A is cut out and enlarged. When the digital zoom function is OFF, the subject is not enlarged or reduced by cutting out the subject image.
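- a center-crop digital zoom of this kind can be sketched as below. This is a hedged illustration using nearest-neighbor enlargement on nested lists; the function name and interpolation choice are assumptions, not taken from the patent.

```python
def digital_zoom(image, factor):
    """Cut out the central portion of `image` and enlarge it back to the
    original size (nearest-neighbor). `factor <= 1` means zoom OFF."""
    if factor <= 1:
        return image  # zoom OFF: no cropping, no enlargement
    h, w = len(image), len(image[0])
    ch, cw = int(h / factor), int(w / factor)      # size of the central crop
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    # enlarge the crop back to h x w
    return [[crop[y * ch // h][x * cw // w] for x in range(w)] for y in range(h)]
```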
- the display control unit 40 displays the subject image generated by the signal processing unit 39 on the display 15.
- the system control unit 41 performs various controls on the endoscope 12, the light source device 13, the processor device 14, and the extended processor device 17.
- the image pickup device 32 is controlled via the image pickup control unit 33 provided in the endoscope 12.
- the image pickup control unit 33 also controls the CDS / AGC34 and the A / D35 in accordance with the control of the image pickup element 32.
- the expansion processor device 17 receives the data transmitted from the processor device 14 at the data transmission / reception unit 44.
- the signal processing unit 45 performs processing related to the length measurement mode based on the data received by the data transmission / reception unit 44. Specifically, the size of the virtual scale is determined from the subject image including the irradiation position of the measurement light, and the determined virtual scale is superimposed and displayed on the subject image.
- the display control unit 46 displays the subject image on which the virtual scale is superimposed on the extended display 18.
- the data transmission / reception unit 44 can transmit data or the like to the processor device 14.
- the measurement light emitting unit 23 emits the measurement light obliquely with respect to the optical axis Ax (see FIG. 13) of the imaging optical system 21.
- the measurement light emission unit 23 includes a light source 23a, a diffractive optical element DOE23b (Diffractive Optical Element), a prism 23c, and an emission unit 23d.
- the light source 23a emits light of a color that can be detected by the pixels of the image pickup element 32 (specifically, visible light), and includes a light emitting element such as a laser diode (LD) or a light emitting diode (LED), and a condenser lens that collects the light emitted from this light emitting element.
- the light source 23a is provided on a scope electric substrate (not shown).
- the scope electric board is provided at the tip end portion 12d of the endoscope, and receives power from the light source device 13 or the processor device 14 to supply power to the light source 23a.
- although the light source 23a is provided at the tip end portion 12d of the endoscope in this example, it may instead be provided inside a connector for connecting the endoscope 12 and the processor device 14. Even in this case, the members of the measurement light emitting unit 23 other than the light source 23a (the diffractive optical element DOE 23b, the prism 23c, and the emission unit 23d) are provided at the tip portion 12d of the endoscope.
- the wavelength of the light emitted by the light source 23a is, for example, red laser light (beam color) of 600 nm or more and 650 nm or less, but light in another wavelength band, for example, green light of 495 nm or more and 570 nm or less, may also be used.
- the light source 23a is controlled by the system control unit 41, and emits light based on an instruction from the system control unit 41.
- the DOE23b converts the light emitted from the light source into the measurement light for obtaining the measurement information.
- the amount of the measurement light is preferably adjusted from the viewpoint of protecting the human body, eyes, and internal organs, and is preferably adjusted to an amount at which whiteout (pixel saturation) does not occur in the observation range of the endoscope 12.
- the prism 23c is an optical member for changing the traveling direction of the measured light after conversion by the DOE23b.
- the prism 23c changes the traveling direction of the measurement light so as to intersect the field of view of the imaging optical system 21 including the objective lens 21a. The details of the traveling direction of the measurement light will also be described later.
- the measured light Lm emitted from the prism 23c is applied to the subject.
- the measurement light emitting unit 23 is housed in the measurement light emitting unit storage unit 47 provided at the tip end portion 12d of the endoscope.
- the storage unit 47 for the measurement light emitting unit has a hole corresponding to the size of the measurement light emitting unit 23.
- the storage portion 47 for the measurement light emitting portion is closed by the transparent lid 48.
- the transparent lid 48 has a transparent plate shape, and one end surface thereof is a flat portion 48a.
- the transparent lid 48 is arranged so that the flat portion 48a is flush with the tip surface 28 of the tip portion 12d.
- a prism 49 is arranged between the transparent lid 48 and the prism 23c.
- the prism 49 has a first contact surface 49a and a second contact surface 49b; the first contact surface 49a is in close contact with the prism 23c, and the second contact surface 49b is in close contact with the transparent lid 48.
- the prism 49 removes gas from between the transparent lid 48 and the prism 23c, making that space airtight. This airtightness prevents dew condensation, and thereby prevents problems such as attenuation, diffusion, convergence, and refraction of the measurement light caused by dew condensation.
- the prism 23c may be a measurement assist slit formed in the tip portion 12d of the endoscope.
- the prism 23c is composed of an optical member, it is preferable to apply an antireflection coating (AR (Anti-Reflection) coating) (antireflection portion) on the emission surface.
- the antireflection coat is provided because, if the measurement light is reflected without passing through the emission surface of the prism 23c and the proportion of the measurement light irradiated onto the subject decreases, it becomes difficult for the irradiation position detection unit 61, described later, to recognize the position of the spot SP formed on the subject by the measurement light.
- the measurement light emitting unit 23 may be any as long as it can emit the measurement light toward the field of view of the image pickup optical system 21.
- the light source 23a may be provided in the light source device, and the light emitted from the light source 23a may be guided to the DOE 23b by an optical fiber.
- the measurement light emitting unit 23 may be configured in any way as long as the measurement light Lm is emitted in a direction crossing the field of view of the image pickup optical system 21.
- the measurement light is emitted in a state where the optical axis Lm of the measurement light intersects the optical axis Ax of the imaging optical system 21.
- it can be seen that, at each point of the imaging range (indicated by the arrows Qx, Qy, and Qz), the positions of the spots SP formed on the subject by the measurement light Lm (the points where the arrows Qx, Qy, and Qz intersect the optical axis Ax) are different.
- the shooting angle of view of the imaging optical system 21 is represented by the region sandwiched between the two solid lines 101X, and the measurement is performed in the central region (the region sandwiched between the two dotted lines 102X), where aberration within the shooting angle of view is smaller.
- the third direction D3 is a direction orthogonal to the first direction D1 and the second direction D2 (the same applies to FIG. 45).
- the size of the subject can be measured from the movement of the spot position with respect to the change in the observation distance. By imaging the subject illuminated by the measurement light with the image sensor 32, a subject image including the spot SP can be obtained.
- the position of the spot SP differs depending on the relationship between the optical axis Ax of the imaging optical system 21 and the optical axis Lm of the measurement light Lm, and on the observation distance. When the observation distance is short, the number of pixels representing the same actual size (for example, 5 mm) is large, and the number of pixels decreases as the observation distance increases.
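- the relationship above can be sketched as a lookup from the spot position to a pixels-per-millimeter figure, from which the on-screen size of a virtual scale follows. The calibration values and coordinate convention below are hypothetical; the patent only states that the pixel count for a given actual size shrinks as the observation distance grows.

```python
def pixels_for_actual_size(spot_y, calibration, size_mm=5.0):
    """Interpolate the number of pixels spanning `size_mm` from the spot
    y-coordinate, clamping outside the calibrated range.

    `calibration` maps spot y-coordinates (observed at known observation
    distances) to pixels per mm; all values here are hypothetical.
    """
    ys = sorted(calibration)
    if spot_y <= ys[0]:
        ppm = calibration[ys[0]]
    elif spot_y >= ys[-1]:
        ppm = calibration[ys[-1]]
    else:
        for y0, y1 in zip(ys, ys[1:]):
            if y0 <= spot_y <= y1:
                t = (spot_y - y0) / (y1 - y0)
                ppm = calibration[y0] + t * (calibration[y1] - calibration[y0])
                break
    return ppm * size_mm

# hypothetical calibration: near spots resolve more pixels per mm than far spots
cal = {100: 20.0, 300: 12.0, 500: 6.0}
```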
- the system control unit 41 includes a length-measurement-compatible endoscope determination unit 140, a measurement light ON / OFF switching unit 141, a length measurement image display setting ON / OFF switching unit 142, a length measurement function operation status display ON / OFF switching unit 143, a virtual scale display switching control unit 144, and a pre-switching image display setting storage unit 149.
- the length measuring compatible endoscope availability determination unit 140 determines whether or not the endoscope 12 is a length measuring compatible endoscope.
- when the endoscope 12 is a length-measurement-compatible endoscope, switching to the length measurement mode is enabled.
- a length-measurement-compatible endoscope is an endoscope that can emit and receive measurement light and can display, on the extended display 18 (or on the display 15), a length measurement image on which a virtual scale based on the measurement light is displayed.
- the length-measurement-compatible endoscope determination unit 140 has a scope ID table (not shown) that associates the scope ID provided in the endoscope 12 with a flag indicating whether the endoscope is length-measurement compatible (for example, "1" for a length-measurement-compatible endoscope and "0" for other endoscopes). When the endoscope 12 is connected, the determination unit 140 reads out the scope ID of the endoscope and determines whether it corresponds to a length-measurement-compatible endoscope by referring to the flag in the scope ID table.
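- the flag lookup can be sketched as a simple table; the scope IDs below are hypothetical placeholders, since the patent does not publish actual IDs.

```python
# scope ID -> compatibility flag ("1" = length-measurement compatible, "0" = other)
SCOPE_ID_TABLE = {"EG-001": 1, "EG-002": 0}  # hypothetical IDs

def supports_length_measurement(scope_id):
    """Return True when the connected scope's flag is 1; unknown IDs are
    treated as not length-measurement compatible."""
    return SCOPE_ID_TABLE.get(scope_id, 0) == 1
```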
- the measurement light ON / OFF switching unit 141 controls the light source 23a to switch the measurement light on (ON) or off (OFF).
- the length measurement image display setting ON / OFF switching unit 142 enables (ON) or disables (OFF) various image display settings in the length measurement mode, such as the display settings (color tone, etc.) of the length measurement image, via the user interface 16 or the like.
- the virtual scale display switching control unit 144 switches the virtual scale on the extended display 18 between display (ON) and non-display (OFF), or changes its display mode.
- in accordance with the operation of switching to the length measurement mode by the observation mode switching switch 12f, the system control unit 41 performs at least one of: switching the measurement light ON or OFF, switching the length measurement image display setting ON or OFF, switching the length measurement function operation status display ON or OFF, switching the virtual scale display ON or OFF, or changing the display mode of the virtual scale.
- the system control unit 41 can switch the measurement light to ON, the length measurement image display setting to ON, the length measurement function operation status display to ON, and the virtual scale display to ON by switching to the length measurement mode.
- in accordance with the operation of switching from the length measurement mode to another mode, it is preferable to switch the measurement light OFF, the length measurement image display setting OFF, the length measurement function operation status display OFF, and the virtual scale display OFF.
- the length measurement function operation status display is displayed in the incidental information display area 18a of the extended display 18 with the scale display icon 146.
- in accordance with the operation of switching to the length measurement mode, the scale display icon 146 is displayed; in accordance with the operation of switching from the length measurement mode to another mode, the scale display icon 146 is hidden.
- the virtual scale 147 is preferably displayed in the observation image display area 18b of the extended display 18. The display mode of the virtual scale 147 is changed by the virtual scale display switching control unit 144.
- the virtual scale 147 includes a 5 mm virtual scale 147a, a 10 mm virtual scale 147b, and a 20 mm virtual scale 147c.
- the virtual scales 147a, 147b, and 147c each have a circular scale (indicated by a dotted line) and a line segment scale (indicated by a solid line). The "5" of the virtual scale 147a indicates that the scale is 5 mm, the "10" of the virtual scale 147b indicates that the scale is 10 mm, and the "20" of the virtual scale 147c indicates that the scale is 20 mm.
- the display mode of the virtual scale is changed, for example, by selecting from a plurality of predetermined scale patterns.
- as shown in FIG. 15, the plurality of scale patterns include, in addition to the scale pattern in which the three virtual scales 147a, 147b, and 147c each having a circular scale and a line segment scale are combined, patterns in which a circular scale or a line segment scale is used alone. A scale pattern is represented by one or a combination of a plurality of scale sizes and a plurality of scale shapes, such as a circular scale and a line segment scale.
- when the length measurement image display setting is turned ON, it is preferable to save the image display setting used before switching to the length measurement mode in the pre-switching image display setting storage unit 149.
- when the observation mode before switching to the length measurement mode is the normal observation mode, it is preferable to save the image display setting of the normal observation mode set in the signal processing unit 39 in the pre-switching image display setting storage unit 149.
- when the length measurement image display setting is turned OFF, it is preferable to switch to the image display setting saved in the pre-switching image display setting storage unit 149. For example, the signal processing unit 39 is set to the saved image display setting of the normal observation mode in accordance with the switching to the normal observation mode.
- when the mode switching condition is not satisfied, the system control unit 41 prohibits switching the measurement light ON, switching the length measurement image display setting ON, switching the length measurement function operation status display ON, and switching the display of the virtual scale ON.
- the mode switching condition is a setting condition for the endoscope 12, the light source device 13, the processor device 14, and the extended processor device 17, and is a condition suitable for executing the length measurement mode.
- the mode switching condition is preferably a condition that does not correspond to the prohibition setting conditions described below. When the mode switching condition is not satisfied, it is preferable to display (ON), instead of the scale display icon 146, a length measurement function operation status disabled display indicating that the virtual scale 147 is not being displayed on the extended display 18. This disabled display is preferably shown in the incidental information display area 18a with the scale non-display icon 148.
- the system control unit 41 is provided with a length measurement mode control unit 50 that controls whether or not the length measurement mode can be executed.
- when the operation to switch to the length measurement mode is performed by the observation mode changeover switch 12f and the setting conditions currently set for the endoscope 12, the light source device 13, and the processor device 14 satisfy a predetermined prohibition setting condition, the length measurement mode control unit 50 performs at least one of the following: a first control that prohibits the switching to the length measurement mode; a second control that, when a setting change operation is performed by the user interface 16 in the length measurement mode and the setting condition to be changed corresponds to the prohibition setting condition, invalidates the setting change operation; and a third control that, when such a setting change is made in the length measurement mode, automatically switches from the length measurement mode to another mode.
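- the decision logic of the three controls can be sketched as below. This is a minimal sketch under assumed names: the set-valued representation of settings, the condition strings, and the `policy` switch between the second and third controls are all illustrative, not from the patent.

```python
def handle_mode_switch_request(current_settings, prohibited):
    """First control: refuse switching to the length measurement mode when
    any currently active setting matches a prohibition setting condition."""
    violations = current_settings & prohibited
    if violations:
        return ("prohibited", violations)   # stay in the current mode, warn the user
    return ("length_measurement", set())

def handle_setting_change(requested, prohibited, policy="invalidate"):
    """Second / third control inside the length measurement mode: either
    invalidate a prohibited setting change, or drop back to normal mode."""
    if requested in prohibited:
        if policy == "invalidate":
            return "change_rejected"        # second control
        return "switch_to_normal_mode"      # third control
    return "change_applied"
```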
- the setting conditions for the light source device 13 include the illumination conditions of the illumination light used in the normal observation mode or the length measurement mode, the illumination conditions of the special light used in the special observation mode, or the illumination conditions of the measurement light used in the length measurement mode. included.
- the lighting conditions include, for example, the amount of illumination light.
- the setting conditions relating to the endoscope 12 include imaging conditions relating to imaging of a subject.
- the imaging conditions include, for example, a shutter speed and the like.
- the setting conditions relating to the processor device 14 include processing conditions such as image processing relating to the subject image.
- the processing conditions include, for example, color balance, brightness correction, various enhancement processes, and the like.
- it is preferable to set setting conditions (illumination light amount, shutter speed, color balance, brightness correction, various enhancement processing) that optimize the position detection of the spot SP and satisfy the visibility required when the user measures dimensions.
- the prohibition setting conditions include a first prohibition setting condition that prevents detection of the irradiation position of the measurement light from the subject image in the length measurement mode, and a second prohibition setting condition that prevents the virtual scale corresponding to the observation distance from being accurately displayed in the length measurement image.
- the first prohibition setting condition includes, for example, a special observation mode, brightness enhancement or red enhancement for a subject image, and the like. In the special observation mode, the red image used for detecting the spot SP or the like in the length measurement mode is not used for image display, so that it is difficult to detect the irradiation position of the measurement light. In the length measurement mode, it is preferable to lower the brightness of the subject image and suppress the redness as compared with the normal observation mode or the special observation mode.
- the second prohibition setting condition includes, for example, use (ON) of a zoom function such as an optical zoom function or a digital zoom function. This is because the virtual scale displayed in the length measurement image is determined according to the position of the spot SP, not according to the magnification of the zoom function; therefore, when the zoom function is ON, it is difficult to display the virtual scale in correspondence with the observation distance.
- when the operation of switching to the length measurement mode is performed during the special observation mode, the length measurement mode control unit 50 performs, as shown in FIG., the first control of prohibiting the switching to the length measurement mode and maintaining the state of the special observation mode.
- in this case, as shown in FIG., the length measurement mode control unit 50 displays on the extended display 18 a message notifying that switching to the length measurement mode is prohibited (a warning sound may be emitted).
- alternatively, as shown in FIG., the length measurement mode control unit 50 may perform control to cancel the special observation mode and switch to the length measurement mode.
- when a setting change operation for turning ON the zoom function is performed via the zoom operation unit 12h, the length measurement mode control unit 50 performs, as shown in FIG., the second control of invalidating the setting change operation for turning ON the zoom function.
- in this case, as shown in FIG., the length measurement mode control unit 50 displays on the extended display 18 a message notifying that the setting change operation for turning ON the zoom function has been invalidated (a warning sound may be emitted).
- alternatively, when the setting change operation for turning ON the zoom function is performed via the zoom operation unit 12h, the length measurement mode control unit 50 performs, as shown in FIG., the third control of canceling the length measurement mode and switching to the normal observation mode as another mode.
- when the third control is performed, the length measurement mode control unit 50 displays on the extended display 18 a message notifying that the length measurement mode has been canceled (the virtual scale is hidden) and that the mode has been switched to the normal observation mode, as shown in FIG. 23 (a warning sound may be emitted).
- in this case, the setting change operation for turning ON the zoom function is enabled, and the subject in the subject image is enlarged or reduced by the zoom function.
- the system control unit 41 may be provided with a brightness information calculation unit 53, an illumination light amount level setting unit 54, a first light emission control table 55, and a second light emission control table 56.
- the brightness information calculation unit 53 calculates brightness information regarding the brightness of the subject based on the image obtained in the normal observation mode or on the first captured image (an image based on the illumination light and the measurement light) obtained in the length measurement mode.
- the illumination light amount level setting unit 54 sets the light amount level of the illumination light based on the brightness information.
- the light intensity level of the illumination light is set to 5 levels of Level1, Level2, Level3, Level4, and Level5.
- Information about the light intensity level of the illumination light is sent to the light source processor 31.
- the light source processor 31 controls the light source unit 30 so that the light amount of the illumination light is equal to the light amount level of the illumination light.
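- the brightness feedback of the illumination light amount level setting unit 54 can be sketched as a mapping from a frame's mean brightness to one of the five levels. The inverse mapping (darker frames get a higher level) and the 0–255 brightness range are assumptions for illustration; the patent does not specify the thresholds.

```python
def illumination_level(mean_brightness, levels=5):
    """Pick one of Level1..Level5 from the frame's mean brightness (0..255).

    Hypothetical rule: darker frames receive a higher light amount level.
    """
    frac = max(0.0, min(1.0, 1.0 - mean_brightness / 255.0))
    return 1 + min(levels - 1, int(frac * levels))
```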
- the first light emission control table 55 is used for controlling the light amount of the measurement light, and stores a first relationship between the coordinate information of the spot SP and the light amount level of the measurement light. Specifically, as shown in FIG. 25, the light amount levels Level1, Level2, Level3, Level4, and Level5 of the measurement light are defined for each of the five coordinate areas to which the coordinate information of the spot SP belongs.
- the system control unit 41 refers to the first light emission control table 55 to specify the light amount level corresponding to the coordinate area to which the position of the spot SP belongs.
- the system control unit 41 controls the light source 23a so that the light amount of the measurement light reaches the specified light amount level. Whether the light amount control of the measurement light is performed using the first light emission control table 55 or the second light emission control table 56 is set as appropriate by operating the user interface 16.
- the coordinate area 1 is the area set at the lowermost position in the first captured image; when the spot SP belongs to the coordinate area 1, the observation distance is at its closest. Therefore, Level1, the smallest light amount level of the measurement light, is assigned to the coordinate area 1. The coordinate area 2 is an area set above the coordinate area 1; when the spot SP belongs to the coordinate area 2, the observation distance is farther than in the case of the coordinate area 1, so Level2, which is larger than Level1, is assigned as the light amount level of the measurement light.
- the moving direction of the spot SP changes according to the crossing direction of the optical axis Ax of the imaging optical system 21 and the optical axis Lm of the measurement light.
- the coordinate area 3 is provided above the coordinate area 2.
- Level3, which is larger than Level2, is assigned as the light amount level of the measurement light.
- the coordinate area 4 is provided above the coordinate area 3.
- Level4, which is larger than Level3, is assigned as the light amount level of the measurement light.
- the coordinate area 5 is the area set at the uppermost position. When the spot SP belongs to the coordinate area 5, the observation distance is at its farthest compared to the other coordinate areas 1 to 4, so Level5, the highest light amount level of the measurement light, is assigned.
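- the first table can be sketched as a lookup from the spot's vertical coordinate to a coordinate area and its level. The row boundaries below are hypothetical; only the ordering (lowermost area 1 = Level1 through uppermost area 5 = Level5, with the lowermost area at the largest image y-coordinate) follows the description.

```python
# hypothetical row boundaries of the five coordinate areas; in image
# coordinates the lowermost area (area 1, nearest distance) has the largest y
FIRST_TABLE = [  # (y_min, y_max, measurement light amount level)
    (400, 500, 1),  # coordinate area 1: nearest observation distance
    (300, 400, 2),
    (200, 300, 3),
    (100, 200, 4),
    (0,   100, 5),  # coordinate area 5: farthest observation distance
]

def measurement_light_level(spot_y):
    """Return the measurement light level for the area containing the spot."""
    for y_min, y_max, level in FIRST_TABLE:
        if y_min <= spot_y < y_max:
            return level
    raise ValueError("spot outside the calibrated image area")
```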
- the second light emission control table 56 is used for controlling the light amount of the measurement light, and stores a second relationship between the coordinate information of the spot SP, the light amount level of the illumination light, and the light amount level of the measurement light. Specifically, as shown in FIG. 27, the light amount level of the measurement light is defined for each combination of the five coordinate areas to which the coordinate information of the spot SP belongs and the illumination light amount levels Level1, Level2, Level3, Level4, and Level5. For example, when the spot SP belongs to the coordinate area 1 and the light amount level of the illumination light is Level3, Level3 is assigned as the light amount level of the measurement light.
- the system control unit 41 specifies the light amount level of the measurement light from the coordinate area to which the position of the spot SP belongs and the light amount level of the illumination light, with reference to the second light emission control table 56. The system control unit 41 then controls the light source 23a so that the light amount of the measurement light reaches the specified light amount level.
- the light amount level of the illumination light and the light amount level of the measurement light are set to a ratio that makes it possible to specify the position of the spot SP. This is because, if the ratio of the light amount of the illumination light to the light amount of the measurement light is not appropriate, the contrast of the spot SP becomes low and it becomes difficult to specify the position of the spot SP.
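- the second table differs from the first in being a two-dimensional lookup. The sketch below builds a stand-in table: only the (area 1, illumination Level3) → Level3 entry comes from the description, and the fill rule `min(5, illumination + area - 1)` is a hypothetical monotone placeholder consistent with that one entry.

```python
def build_second_table():
    """Construct a stand-in second light emission control table.

    Keys are (coordinate area, illumination light level); values are the
    measurement light level. Every value except (1, 3) -> 3 is hypothetical.
    """
    table = {}
    for area in range(1, 6):           # coordinate areas 1..5
        for illum in range(1, 6):      # illumination light levels 1..5
            table[(area, illum)] = min(5, illum + (area - 1))
    return table

def measurement_light_level_2(area, illum_level, table=None):
    """Look up the measurement light level from the two-dimensional table."""
    table = table or build_second_table()
    return table[(area, illum_level)]
```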
- the light source processor 31 continuously emits the illumination light used for overall illumination of the observation target, while the measurement light Lm is emitted in pulses. Therefore, as shown in FIG. 28, the light emission frames in the length measurement mode include an illumination light single emission frame FLx, in which the illumination light is emitted alone without the measurement light, and a measurement light emission frame FLy, in which both the illumination light and the measurement light are emitted.
- the position of the spot SP is detected from the first captured image obtained in the measurement light emission frame FLy, while the virtual scale is displayed on the second captured image obtained in the illumination light single emission frame FLx.
- the period in which the solid line corresponds to "on" indicates the period in which the illumination light or the measurement light is emitted, and the period in which the solid line corresponds to "off" indicates the period in which the illumination light or the measurement light is not emitted.
- the light emission and imaging patterns in the length measurement mode are as follows.
- the first pattern is a case where a CCD (global shutter type image sensor), which performs exposure and charge readout at the same timing in all pixels and outputs an image signal, is used as the image sensor 32. In the first pattern, the measurement light is emitted at a specific frame interval of every two frames.
- in the first pattern, as shown in FIG. 29, when switching between the normal observation mode and the length measurement mode (when switching from the timing T1 to the timing T2), the charge is read out simultaneously in all pixels (global shutter) based on the exposure of the illumination light at the timing T1 in the normal observation mode, so that a second captured image N containing only the component of the illumination light is obtained.
- the second captured image N is displayed on the extended display 18 at the timing T2.
- the rising line 57 rising in the vertical direction at the time of switching from the timing T1 to the timing T2 indicates that the global shutter was performed. This also applies to the other rising lines 57.
- at the timing T2, the illumination light and the measurement light are emitted. Based on the exposure of the illumination light and the measurement light at the timing T2, the charge is read out simultaneously at the switch from the timing T2 to the timing T3, so that a first captured image N+Lm containing the components of the illumination light and the measurement light is obtained.
- the position of the spot SP is detected based on the first captured image N + Lm.
- the virtual scale corresponding to the position of the spot SP is displayed with respect to the second captured image N displayed at the timing T2.
- that is, the length measurement image S displaying the virtual scale is displayed based on the second captured image N of the timing T2.
- the second captured image N of the timing T2 (first timing) is displayed on the extended display 18 not only at the timing T2 but also at the timing T3. That is, the second captured image of the timing T2 is displayed continuously for two frames until the timing T4 (second timing) at which the next second captured image is obtained (the same subject image is displayed at the timings T2 and T3).
- the first captured image N + Lm is not displayed on the extended display 18.
- in the normal observation mode, the second captured image N is updated every frame, whereas in the first pattern of the length measurement mode, the same second captured image N is displayed for two frames in succession as described above; the frame rate of the first pattern in the length measurement mode therefore becomes substantially 1/2 of that in the normal observation mode.
- the same applies to the timing T4 and later. That is, at the timings T4 and T5, the length measurement image S is displayed based on the second captured image of the timing T4, and at the timings T6 and T7, the length measurement image S is displayed based on the second captured image N of the timing T6.
- the first captured image N+Lm is not displayed on the extended display 18. By using the second captured image N, which does not contain the component of the measurement light, for the display of the length measurement image S in this way, the frame rate is slightly reduced, but the emission of the measurement light does not interfere with the visibility of the observation target.
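- the hold-and-repeat display behavior can be sketched as below: every `interval`-th frame is a measurement light emission frame whose image N+Lm is never shown, and the last illumination-only image is held on screen instead. The frame labels and the parameterized interval (2 for the CCD pattern, 3 for the CMOS pattern) are illustrative assumptions.

```python
def display_sequence(num_frames, interval=2):
    """Return which captured image is shown at each frame timing.

    Frames at multiples of `interval` are illumination light single emission
    frames producing a new image N; the intervening measurement light
    emission frames re-display the held image.
    """
    shown = []
    last_n = None
    for t in range(num_frames):
        if t % interval == 0:            # illumination light single emission frame
            last_n = f"N@T{t + 1}"       # new illumination-only image
        shown.append(last_n)             # N+Lm frames keep showing the held image
    return shown
```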
- the second pattern is a case where a CMOS (rolling shutter type image sensor) is used as the image sensor 32: the sensor has a plurality of lines for imaging the observation target illuminated by the illumination light or the measurement light, exposes each line at a different exposure timing, reads out the charge of each line at a different readout timing, and outputs an image signal. In the second pattern, the measurement light is emitted at a specific frame interval of every three frames.
- the second captured image N having only the component of the illumination light is obtained.
- the second captured image N is displayed on the extended display 18 at the timing T2.
- the diagonal line 58 represents the exposure and charge-readout timing; the exposure and the charge readout start at the line Ls and are completed at the line Lt.
- the illumination light and the measurement light are emitted.
- by performing the rolling-shutter readout with the illumination light emitted from the timing T1 to the timing T2 and the measurement light emitted at the timing T2, the first captured image N + Lm containing the components of both the illumination light and the measurement light is obtained at the timing of switching from the timing T2 to the timing T3. Likewise, at the timing of switching from the timing T3 to the timing T4, a first captured image N + Lm containing the components of the illumination light and the measurement light is obtained.
- the position of the spot SP is detected based on the above first captured image N + Lm. Further, at the timings T3 and T4, the measurement light is not emitted.
- the virtual scale corresponding to the position of the spot SP is displayed with respect to the second captured image N displayed at the timing T2.
- the length measurement image S on which the virtual scale is displayed is displayed with respect to the second captured image N at the timing T2.
- the second captured image N of the timing T2 (first timing) is displayed on the extended display 18 not only at the timing T2 but also at the timings T3 and T4. That is, the second captured image of the timing T2 is displayed continuously for three frames until the timing T5 (second timing) at which the next second captured image is obtained (the same subject image is displayed at the timings T2, T3, and T4).
- the first captured image N + Lm is not displayed on the extended display 18.
- the same second captured image N is continuously displayed for three frames, so that the frame rate of the second pattern of the length measurement mode is substantially 1/3 of that of the normal observation mode.
- the same applies to timing T5 and later.
- the second captured image of the timing T5 is displayed with respect to the length measurement image S.
- the first captured image N + Lm is not displayed on the extended display 18. In this way, by displaying the second captured image, which does not include the component of the measurement light, as the length measurement image S, the frame rate is lowered, but the emission of the measurement light does not interfere with the visibility of the observation target.
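- the display schedules of the two patterns above reduce to simple arithmetic: holding the same second captured image for k consecutive frames divides the displayed update rate by k. A minimal sketch follows; the 60 fps base rate is an illustrative assumption, not a value from the embodiment.

```python
def effective_display_fps(base_fps: float, hold_frames: int) -> float:
    """Displayed-subject update rate when the same second captured image
    is held for `hold_frames` consecutive frames."""
    return base_fps / hold_frames

# First pattern holds each second captured image for 2 frames (1/2 rate),
# the second pattern for 3 frames (1/3 rate).
print(effective_display_fps(60.0, 2))  # -> 30.0
print(effective_display_fps(60.0, 3))  # -> 20.0
```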
- in order to recognize the position of the spot SP and set the virtual scale, the signal processing unit 45 of the expansion processor device 17 includes a first signal processing unit 59 that detects the position of the spot SP in the captured image and a second signal processing unit 60 that sets a virtual scale according to the position of the spot SP.
- when the illumination light is kept on while the measurement light is alternately turned on and off, the captured images include the first captured image, which is obtained while the measurement light is on and therefore contains the components of both the illumination light and the measurement light.
- the first signal processing unit 59 includes an irradiation position detection unit 61 that detects the irradiation position of the spot SP from the captured image.
- it is preferable that the irradiation position detection unit 61 acquires the center-of-gravity coordinates of the spot SP as the irradiation position of the spot SP.
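- the center-of-gravity computation can be sketched as follows; `spot_centroid` is a hypothetical helper, and the threshold of 225 is borrowed from the red-image binarization threshold stated later in this description.

```python
import numpy as np

def spot_centroid(red_channel: np.ndarray, threshold: int = 225):
    """Return the center of gravity (x, y) of the bright spot, or None."""
    mask = red_channel >= threshold          # binarize: spot pixels -> True
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # no spot found in this frame
    return float(xs.mean()), float(ys.mean())

img = np.zeros((8, 8), dtype=np.uint8)
img[3:5, 4:6] = 240                          # synthetic 2x2 spot
print(spot_centroid(img))                    # -> (4.5, 3.5)
```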
- the second signal processing unit 60 sets the first virtual scale as a virtual scale for measuring the size of the subject based on the irradiation position of the spot SP, and sets the scale display position of the first virtual scale.
- the second signal processing unit 60 sets the first virtual scale by referring to the scale table 62, which stores virtual scale images, whose display mode differs depending on the irradiation position of the spot SP, in association with the irradiation position of the spot.
- the virtual scale differs in size or shape, for example, depending on the irradiation position of the spot SP and the marker display position. The display of the virtual scale image will be described later.
- the scale table 62 stores the virtual scale image in association with the irradiation position, but it may instead store the virtual scale image in association with the distance to the subject corresponding to the irradiation position (the distance between the tip portion 12d of the endoscope 12 and the subject).
- since a virtual scale image is required for each irradiation position, the data capacity becomes large; it is therefore preferable to hold the scale table 62 in the extended processor device 17 (or the processor device 14) rather than in a memory (not shown) in the endoscope 12. Further, as described later, the virtual scale image is created from the representative points of the virtual scale image obtained by calibration; however, if the virtual scale image were created from the representative points during the length measurement mode, a loss time would occur and the real-time property of the processing would be impaired. Therefore, once a virtual scale image has been created from the representative points and the scale table 62 has been updated after the endoscope 12 is connected to the endoscope connection portion, the virtual scale is no longer created from the representative points; instead, the virtual scale image is displayed using the updated scale table 62.
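- the update-once, look-up-many design described above can be sketched as follows. The function names and the string-valued renderer are hypothetical stand-ins; the point is that rendering from representative points happens only at connection time, while length-measurement frames pay only a dictionary lookup.

```python
# Hypothetical sketch: pre-render a virtual scale image for every sampled
# irradiation position when the endoscope is connected, then look them up.
scale_table = {}

def update_scale_table(positions, render_scale):
    """Run once at endoscope connection (the table update step)."""
    scale_table.clear()
    for pos in positions:
        scale_table[pos] = render_scale(pos)   # stands in for interpolation

def lookup_scale(pos):
    """Per-frame lookup in length-measurement mode; no rendering cost."""
    return scale_table[pos]

update_scale_table([(10, 20), (30, 40)], lambda p: f"scale@{p}")
print(lookup_scale((30, 40)))  # -> scale@(30, 40)
```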
- instead of the virtual scale image to be superimposed on the length measurement image, a reference marker whose size is determined from the relationship between the irradiation position of the spot SP and the number of pixels corresponding to the actual size of the subject may be displayed on the length measurement image.
- the second signal processing unit 60 includes a table updating unit 64 that updates the scale table 62 when the endoscope 12 is connected to the endoscope connecting unit.
- the scale table 62 can be updated in this way because the positional relationship between the optical axis Lm of the measurement light and the imaging optical system 21 differs depending on the model and serial number of the endoscope 12, and the display mode of the virtual scale image changes accordingly.
- for the update, the representative point data table 66, which stores representative point data relating to the representative points extracted from the virtual scale image in association with the irradiation position, is used. Details of the table update unit 64 and the representative point data table 66 will be described later.
- the representative point data table 66 may instead store the representative point data in association with the distance to the subject corresponding to the irradiation position (the distance between the tip portion 12d of the endoscope 12 and the subject).
- the display control unit 46 controls the display mode of the virtual scale so that it differs depending on the irradiation position of the spot SP and the marker display position when the length measurement image, in which the virtual scale is superimposed on the captured image, is displayed on the extended display 18. Specifically, the display control unit 46 displays on the extended display 18 the length measurement image on which the first virtual scale is superimposed centering on the spot SP.
- as the first virtual scale, for example, a circular measurement marker is used. In this case, as shown in FIG. 32, when the observation distance is close to the near end Px (see FIG. 13), a virtual scale M1 indicating an actual size of 5 mm (in the horizontal and vertical directions of the captured image) is displayed in alignment with the center of the spot SP1 formed on the tumor tm1 of the subject.
- since the marker display position of the virtual scale M1 is located in the peripheral portion of the captured image, which is affected by the distortion caused by the imaging optical system 21, the virtual scale M1 has an elliptical shape reflecting the influence of the distortion. Since the virtual scale M1 substantially coincides with the range of the tumor tm1, the tumor tm1 can be measured to be about 5 mm. It should be noted that the spot need not be displayed on the captured image, and only the first virtual scale may be displayed.
- a virtual scale M2 indicating an actual size of 5 mm (in the horizontal and vertical directions of the captured image) is displayed in alignment with the center of the spot SP2 formed on the tumor tm2 of the subject. Since the marker display position of the virtual scale M2 is located in the center of the captured image, which is not easily affected by distortion by the imaging optical system 21, the virtual scale M2 is circular, being hardly affected by distortion or the like.
- a virtual scale M3 indicating an actual size of 5 mm (in the horizontal and vertical directions of the captured image) is displayed in alignment with the center of the spot SP3 formed on the tumor tm3 of the subject. Since the marker display position of the virtual scale M3 is located in the peripheral portion of the captured image, which is affected by the distortion caused by the imaging optical system 21, the virtual scale M3 has an elliptical shape reflecting the influence of the distortion. As shown in FIGS. 32 to 34 above, the size of the first virtual scale corresponding to the same actual size of 5 mm becomes smaller as the observation distance becomes longer. Further, the shape of the first virtual scale differs depending on the marker display position, reflecting the influence of the distortion caused by the imaging optical system 21.
- the center of the spot SP and the center of the marker are displayed so as to coincide with each other.
- the first virtual scale may be displayed at a position away from the spot SP. Even in this case, however, it is preferable to display the first virtual scale in the vicinity of the spot. Further, instead of deforming the first virtual scale for display, the distortion aberration of the captured image may be corrected and an undeformed first virtual scale may be displayed on the corrected captured image.
- in the above, the first virtual scale corresponding to an actual size of 5 mm of the subject is displayed, but an arbitrary value (for example, 2 mm, 3 mm, 10 mm, etc.) may be set as the actual size of the subject depending on the observation target and the observation purpose.
- the first virtual scale has a substantially circular shape, but as shown in FIG. 35, it may be a cross shape in which vertical lines and horizontal lines intersect. Further, a graduated cross shape in which a scale Mx is added to at least one of the vertical line and the horizontal line of the cross shape may be used.
- as the first virtual scale, a distorted cross shape in which at least one of the vertical line and the horizontal line is tilted may be used. The first virtual scale may also be a shape in which a cross and a circle are combined, or a measurement point cloud type in which a plurality of measurement points EP corresponding to the actual size from the spot are combined.
- the number of the first virtual scale may be one or a plurality, and the color of the first virtual scale may be changed according to the actual size.
- as the first virtual scale, as shown in FIG. 36, three concentric virtual scales M4A, M4B, and M4C of different sizes (2 mm, 5 mm, and 10 mm in diameter, respectively) may be displayed on the captured image, centered on the spot SP4 formed on the tumor tm4. Since the three concentric virtual scales display a plurality of virtual scales at once, the trouble of switching can be saved, and measurement is possible even when the subject has a non-linear shape.
- when displaying a plurality of concentric virtual scales centered on the spot, instead of specifying the size and color for each virtual scale individually, combinations of a plurality of conditions may be prepared in advance so that the user can select from among the combinations.
- in FIG. 36, all three concentric virtual scales are displayed in the same color (black), but when displaying a plurality of concentric markers, a plurality of colored concentric markers whose colors differ for each virtual scale may be displayed.
- the virtual scale M5A is represented by a dotted line representing red, the virtual scale M5B by a solid line representing blue, and the virtual scale M5C by an alternate long and short dash line representing white.
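- the "prepared combinations" idea above can be sketched as preset sets of (diameter, color, line style) tuples selected as a whole rather than configured scale by scale. The preset names and the pairing of styles to colors below are illustrative assumptions; the diameters and colors follow the values given for FIGS. 36 and the colored variant.

```python
# Hypothetical preset combinations for concentric virtual scales.
PRESETS = {
    "default": [(2, "red", "dotted"),
                (5, "blue", "solid"),
                (10, "white", "dash-dot")],
    "small":   [(2, "red", "dotted"),
                (5, "blue", "solid")],
}

def select_preset(name):
    """Pick one prepared combination instead of per-scale settings."""
    return PRESETS[name]

print(len(select_preset("default")))  # -> 3
```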
- the first virtual scale in addition to a plurality of concentric virtual scales, as shown in FIG. 38, a plurality of distorted concentric virtual scales in which each concentric circle is distorted may be used.
- the distorted concentric virtual scales M6A, M6B, and M6C are displayed in the captured image, centered on the spot SP5 formed on the tumor tm5.
- the table update unit 64 refers to the representative point data table 66, creates a virtual scale image corresponding to the model and/or serial number of the endoscope 12, and updates the scale table 62.
- the representative point data table 66 stores the representative point data regarding the representative points of the virtual scale image obtained at the time of calibration in association with the irradiation position of the spot SP.
- the representative point data table 66 is created by the calibration method described later.
- as the representative point data, the coordinate information (X coordinate, Y coordinate) of the representative points RP obtained by extracting some points from the circular virtual scale image M, which is a virtual scale image, is used.
- the representative point data stored in the representative point data table 66 is data when the positional relationship between the optical axis Lm of the measurement light and the imaging optical system 21 is the default positional relationship.
- the table update unit 64 acquires information on the positional relationship between the optical axis Lm of the measurement light and the imaging optical system 21, and updates the scale table 62 using the positional relationship and the representative point data table 66.
- a difference value of the coordinate information of the representative points RP is calculated based on the difference between the default positional relationship and the positional relationship between the optical axis Lm of the measurement auxiliary light in the endoscope 12 connected to the endoscope connecting portion and the imaging optical system 29b. Then, as shown in FIG. 40, the table update unit 64 creates a virtual scale image M* based on the representative points RP* whose coordinates are shifted from the default representative points RP by the calculated difference value.
- when creating the virtual scale image, it is preferable that the table update unit 64 performs interpolation processing for connecting the representative points RP*.
- the created virtual scale image after interpolation processing is associated with the irradiation position by the table update unit 64, which completes the update of the scale table 62. In FIG. 40, only some of the representative points RP and RP* are labeled.
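- the shift-and-interpolate step can be sketched as follows, assuming linear interpolation between consecutive representative points as the interpolation processing; the point set, the offset, and the sample count are toy values for illustration only.

```python
import numpy as np

def build_scale_outline(rep_points, dx, dy, samples_per_edge=8):
    """Shift default representative points RP by the per-scope offset
    (dx, dy) to get RP*, then linearly interpolate between consecutive
    RP* to close the scale outline."""
    rp_star = np.asarray(rep_points, dtype=float) + np.array([dx, dy])
    outline = []
    n = len(rp_star)
    for i in range(n):
        a, b = rp_star[i], rp_star[(i + 1) % n]   # wrap around to close it
        for t in np.linspace(0.0, 1.0, samples_per_edge, endpoint=False):
            outline.append((1.0 - t) * a + t * b)
    return np.array(outline)

rp = [(0, 0), (10, 0), (10, 10), (0, 10)]         # toy default RP (a square)
outline = build_scale_outline(rp, dx=2, dy=3)
print(outline.shape)  # -> (32, 2)
```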
- as the measurement light, light that forms a spot when the subject is irradiated is used, but other light may be used.
- for example, planar measurement light that forms an intersection line 67 on the subject may be used.
- in that case, a second virtual scale consisting of the intersection line 67 and a scale 68, serving as an index of the size of the subject (for example, polyp P), generated on the intersection line 67 is used.
- the irradiation position detection unit 61 detects the position of the intersection line 67 (irradiation position of the measurement light).
- since the second spots SPk2 are smaller than the first spot SPk1 and the interval between the second spots SPk2 is small, a specific intersection curve SCC is formed on the intersection curve by the plurality of second spots SPk2. Measurement information is calculated based on the position of the specific intersection curve SCC.
- in order to perform the position recognition of the first spot SPk1 or the second spots SPk2 and the calculation of the measurement information, the signal processing unit 45 of the expansion processor device 17 has a position specifying unit 69 and a measurement information processing unit 70.
- the position specifying unit 69 identifies the position of the first spot SPk1 or the second spot SPk2 from the captured image.
- the captured image is binarized, and the center of gravity of the white portion (pixels whose signal intensity is higher than the binarization threshold) in the binarized image is specified as the position of the first spot SPk1 or the second spot SPk2.
- the measurement information processing unit 70 calculates measurement information from the position of the first spot SPk1 or the second spot SPk2.
- the calculated measurement information is displayed on the captured image by the display control unit 46.
- the measurement information can be accurately calculated even in a situation where the subject has a three-dimensional shape.
- the measurement information includes a first straight-line distance indicating the straight-line distance between the first spot SPk1 and the second spot SPk2.
- the measurement information processing unit 70 calculates the first straight-line distance by the following method. As shown in FIG. 45, the measurement information processing unit 70 obtains the coordinates (xp1, yp1, zp1) indicating the actual size of the first spot SPk1 based on the position of the first spot SPk1. For xp1 and yp1, the coordinates corresponding to the actual size are obtained from the coordinates of the position of the first spot SPk1 in the captured image. For zp1, the coordinate corresponding to the actual size is obtained from the coordinates of the position of the first spot SPk1 and the coordinates of the position of the specific spot SPk determined in advance. Similarly, the coordinates (xp2, yp2, zp2) indicating the actual size of the second spot SPk2 are obtained: xp2 and yp2 are obtained from the coordinates of the position of the second spot SPk2 in the captured image, and zp2 is obtained from the coordinates of the position of the second spot SPk2 and the coordinates of the position of the specific spot SPk determined in advance.
- the calculated first straight line distance is displayed as measurement information 71 (“20 mm” in FIG. 44) on the captured image.
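- given the two actual-size coordinate sets, the first straight-line distance is simply their Euclidean distance. A minimal sketch; the coordinates below are toy values chosen so the result happens to equal the 20 mm shown in FIG. 44, not values from the embodiment.

```python
import math

def first_straight_line_distance(p1, p2):
    """Euclidean distance between the actual-size coordinates
    (xp1, yp1, zp1) and (xp2, yp2, zp2) of the two spots, in mm."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

# Toy coordinates: a 12-16-0 displacement gives a 20 mm distance.
print(first_straight_line_distance((0, 0, 0), (12, 16, 0)))  # -> 20.0
```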
- the specific spot SPk may or may not be displayed on the extended display 18.
- the measurement light a plurality of spot lights arranged in a grid pattern at predetermined intervals in the vertical direction and the horizontal direction may be used.
- the image of the diffraction spot DS1 is acquired by imaging the tumor tm or the like in the subject with the spot light arranged in a grid pattern.
- the signal processing unit 45 of the expansion processor device 17 measures the interval DT of the diffraction spot DS1.
- the interval DT corresponds to the number of pixels on the image plane of the image sensor 32. It should be noted that the interval in a specific portion of the plurality of diffraction spots DS1 (for example, the interval near the center of the image plane) may be measured.
- the direction and distance to the subject are calculated based on the measurement result.
- the relationship between the interval (number of pixels) of the diffraction spot DS1 and the distance to the subject is used.
- the direction (θ, φ) and the distance (r) of each diffraction spot DS1 to be measured are calculated.
- the two-dimensional information or the three-dimensional information of the subject is calculated based on the calculated direction and distance.
- for example, the shape, size, area, and the like in the XY plane of the two-dimensional space of FIG. are calculated.
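- the interval-to-distance relationship used above can be sketched as a calibration lookup: a wider spot spacing on the sensor corresponds to a closer subject. The calibration table values below are hypothetical, not from the embodiment.

```python
import numpy as np

# Hypothetical calibration: diffraction-spot interval (pixels) vs. subject
# distance (mm); larger spacing on the sensor means a closer subject.
INTERVALS_PX = np.array([10.0, 20.0, 40.0, 80.0])
DISTANCE_MM = np.array([40.0, 20.0, 10.0, 5.0])

def distance_from_interval(interval_px: float) -> float:
    """Interpolate the subject distance from the measured interval DT."""
    # np.interp requires ascending x values, which INTERVALS_PX satisfies.
    return float(np.interp(interval_px, INTERVALS_PX, DISTANCE_MM))

print(distance_from_interval(20.0))  # -> 20.0 (exact table entry)
print(distance_from_interval(30.0))  # -> 15.0 (midway between entries)
```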
- in order to recognize the position of the spot and set the virtual scale, the signal processing unit 45 of the expansion processor device 17 has a position specifying unit 72 that specifies the position of the spot SP in the first captured image (an image based on the measurement light and the illumination light), and an image processing unit 73 that processes the first captured image or the second captured image (an image based on the illumination light) to generate a length measurement image based on the position of the spot SP.
- the position specifying unit 72 has a noise component removing unit 74 that removes a noise component that hinders the specification of the position of the spot SP. If the first captured image contains a color that differs from, but is close to, the color of the measurement light forming the spot SP (an approximate color of the measurement light), the position of the spot SP may not be specified accurately. Therefore, the noise component removing unit 74 removes the component of the color close to the measurement light from the first captured image as a noise component. The position specifying unit 72 then specifies the position of the spot SP based on the noise-removed first captured image, from which the noise component has been removed.
- the noise component removing unit 74 includes a color information conversion unit 75, a binarization processing unit 76, a mask image generation unit 77, and a removal unit 78.
- the color information conversion unit 75 converts the first captured image, which is an RGB image, into a first color information image, and converts the second captured image, which is an RGB image, into a second color information image.
- as the color information, for example, HSV (H: hue, S: saturation, V: value (brightness)) is preferable. The color differences Cr and Cb may also be used as the color information.
- the binarization processing unit 76 binarizes the first color information image into a binarized first color information image, and binarizes the second color information image into a binarized second color information image.
- the threshold value for binarization is set so as to include the color of the measurement auxiliary light.
- the binarized first color information image includes the color information 80 of the noise component in addition to the color information 79 of the measurement light.
- the binarized second color information image includes the color information 80 of the noise component.
- the mask image generation unit 77 generates a mask image for removing the color information of the noise component from the first captured image and extracting the color information of the measurement light, based on the binarized first color information image and the binarized second color information image. As shown in FIG. 52, the mask image generation unit 77 specifies a noise component region 81 from the noise component included in the binarized second color information image. It is preferable that the noise component region 81 is larger than the region occupied by the color information 80 of the noise component, because when camera shake or the like occurs, the area of the color information 80 of the noise component becomes larger than when there is no camera shake.
- specifically, the mask image generation unit 77 generates a mask image in which the region of the color information 79 of the measurement light in the binarized first color information image serves as an extraction region from which color information is extracted, and the noise component region 81 serves as a non-extraction region from which color information is not extracted.
- FIGS. 50 to 53 are schematic diagrams for explaining the binarized first color information image, the binarized second color information image, the noise component region, and the mask image.
- the removing unit 78 extracts the color information from the first color information image using the mask image, whereby a noise-removed first color information image, in which the color information of the noise component is removed and the color information of the measurement auxiliary light is extracted, can be obtained.
- the noise-removed first color information image becomes a noise-removed first captured image by performing an RGB conversion process for returning the color information to an RGB image.
- the position specifying unit 72 specifies the position of the spot SP based on the noise-removed first captured image. Since the noise component has been removed from the noise-removed first captured image, the position of the spot SP can be specified accurately.
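- the mask-based removal above can be sketched as follows: pixels that pass the measurement-light color threshold in the first image but also appear in the second image (which contains no measurement light) are treated as noise, and the noise region is slightly enlarged to tolerate camera shake. The function name and the one-step dilation are illustrative assumptions.

```python
import numpy as np

def remove_noise(first_bin, second_bin, dilate=1):
    """Keep spot candidates from the binarized first image after masking
    out the (enlarged) noise region found in the binarized second image."""
    noise_region = second_bin.astype(bool).copy()
    for _ in range(dilate):
        # enlarge the noise region by one pixel (4-neighborhood dilation)
        padded = np.pad(noise_region, 1)
        noise_region = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                        padded[1:-1, :-2] | padded[1:-1, 2:] |
                        padded[1:-1, 1:-1])
    return first_bin.astype(bool) & ~noise_region

first = np.zeros((7, 7), bool)
first[1, 1] = True          # true spot (measurement light color)
first[5, 5] = True          # noise pixel of an approximate color
second = np.zeros((7, 7), bool)
second[5, 5] = True         # same noise also present without measurement light
out = remove_noise(first, second)
print(out[1, 1], out[5, 5])  # -> True False
```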
- the image processing unit 73 has an image selection unit 82 and a scale table 62.
- the image selection unit 82 selects a processing target image, which is a target image to be processed based on the position of the spot SP, from the first captured image or the second captured image.
- the image processing unit 73 performs processing based on the position of the spot SP on the image selected as the image to be processed.
- the image selection unit 82 selects the image to be processed based on the state regarding the position of the spot SP.
- the image selection unit 82 may also select the image to be processed according to an instruction from the user. For the instruction by the user, for example, the user interface 16 is used.
- when the spot SP is within a specific range during a specific period, it is considered that there is little movement of the subject or the tip portion 12d of the endoscope, so the second captured image is selected as the image to be processed.
- in this case, alignment with the lesion included in the subject can be performed easily even without the spot SP.
- the second captured image does not include the color component of the measurement auxiliary light, the color reproducibility of the subject is not impaired.
- when the position of the spot SP is not within the specific range during the specific period, it is considered that the movement of the subject or the tip portion 12d of the endoscope is large, so the first captured image is selected as the image to be processed.
- in this case, the user operates the endoscope 12 so that the spot SP is located at the lesion portion, which facilitates alignment with the lesion.
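- the switching rule above (steady spot → second captured image, moving spot → first captured image) can be sketched as follows. The 3-frame window and 5-pixel range are illustrative assumptions for the "specific period" and "specific range", not values from the embodiment.

```python
from collections import deque

class ImageSelector:
    """Select the processing-target image from the recent spot positions."""

    def __init__(self, period_frames=3, range_px=5.0):
        self.history = deque(maxlen=period_frames)
        self.range_px = range_px

    def select(self, spot_xy):
        self.history.append(spot_xy)
        xs = [p[0] for p in self.history]
        ys = [p[1] for p in self.history]
        steady = (len(self.history) == self.history.maxlen and
                  max(xs) - min(xs) <= self.range_px and
                  max(ys) - min(ys) <= self.range_px)
        # steady spot: little movement -> second captured image
        # otherwise: large movement -> first captured image
        return "second" if steady else "first"

sel = ImageSelector()
sel.select((100, 100)); sel.select((101, 100))
print(sel.select((102, 101)))  # -> second (spot stayed within range)
print(sel.select((200, 200)))  # -> first (large jump detected)
```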
- the image processing unit 73 generates a first virtual scale indicating the actual size of the subject as a virtual scale based on the position of the spot SP in the first captured image.
- the image processing unit 73 calculates the size of the virtual scale from the position of the spot SP with reference to the scale table 62, which stores the relationship between the position of the spot SP in the first captured image and the first virtual scale indicating the actual size of the subject. Then, the image processing unit 73 generates a first virtual scale corresponding to the calculated size.
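- the table lookup above can be sketched with a nearest-entry search: since the spot position shifts with observation distance, sampled spot positions can be mapped to the pixel size of a scale representing a fixed actual size. The table keys, radii, and the nearest-neighbor policy are hypothetical values for illustration.

```python
# Hypothetical scale table: spot y-coordinate (px) -> radius in px of a
# virtual scale representing 5 mm on the subject (closer subject -> larger).
SCALE_TABLE = {
    100: 80,
    200: 55,
    300: 38,
    400: 26,
}

def scale_radius_px(spot_y: int) -> int:
    """Look up the scale size for the nearest sampled spot position."""
    nearest = min(SCALE_TABLE, key=lambda k: abs(k - spot_y))
    return SCALE_TABLE[nearest]

print(scale_radius_px(290))  # -> 38 (nearest sampled position is 300)
```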
- in order to recognize the position of the spot SP and set the virtual scale, the signal processing unit 45 of the expansion processor device 17 includes a first signal processing unit 84 that detects the position of the spot SP in the captured image and a second signal processing unit 85 that sets a virtual scale according to the position of the spot SP.
- the first signal processing unit 84 includes a mask processing unit 86, a binarization processing unit 87, a noise component removing unit 88, and an irradiation position detection unit 89.
- the process of removing the noise component in the first signal processing unit 84 will be described with reference to FIGS. 55 to 57.
- the mask processing unit 86 performs mask processing on the red image, the green image, and the blue image of the captured image to extract a substantially parallelogram-shaped illumination position movable range Wx, which indicates the movable range of the illumination position of the measurement light on the subject. As a result, as shown in FIGS., a red image PRx, a green image PGx, and a blue image PBx after mask processing, from which the illumination position movable range Wx is extracted, are obtained.
- the noise component is removed from the pixels within the illumination position movable range, and the irradiation position of the spot SP is detected.
- the binarization processing unit 87 performs the first binarization processing on the pixels in the illumination position movable range of the red image PRx after the mask processing, thereby obtaining a binarized red image PRy (binarized first spectroscopic image).
- a pixel having a pixel value of "225" or more is set to "1"
- a pixel having a pixel value of less than "225” is set to "0". ..
- the spot SP which is a component of the measurement auxiliary light, is detected.
- the threshold condition is a condition relating to the threshold indicating the boundary between the pixel values of pixels set to "0" and the pixel values of pixels set to "1" by binarization; that is, it is a condition that determines the range of pixel values of pixels set to "0" and the range of pixel values of pixels set to "1" by binarization.
- the noise component removing unit 88 performs first difference processing between the binarized red image PRy and a binarized green image PGy (binarized second spectroscopic image) obtained by binarizing the green image PGx by the second binarization processing. In the first difference image PD1 obtained by the first difference processing, the first noise component N1 is removed.
- however, the second noise component N2 often remains without being removed. For pixels whose values become "0" or less by the first difference processing, the pixel value is set to "0".
- in the threshold condition for the second binarization processing, a pixel whose pixel value is in the range of "30" to "220" is set to "1", and pixels in the other ranges, that is, from "0" to less than "30" or more than "220", are set to "0".
- the first noise component is removed by the first difference processing between the binarized red image and the binarized green image, but the first noise component may be removed by another first arithmetic processing.
- the noise component removing unit 88 then performs second difference processing between the first difference image PD1 and a binarized blue image PBy obtained by binarizing the blue image PBx by the third binarization processing. In the second difference image PD2 obtained by the second difference processing, the second noise component, which was difficult to remove by the first difference processing, is removed.
- the pixel value is set to "0" for the pixels that become "0" or less by the second difference processing.
- a threshold condition for the third binarization process a pixel having a pixel value of "160” or more is set to "1", and a pixel having a pixel value of less than "160” is set to "0".
- although the second noise component is removed by the second difference processing between the first difference image and the binarized blue image, the second noise component may be removed by other second arithmetic processing.
- the irradiation position detection unit 89 detects the irradiation position of the spot SP from the first difference image or the second difference image. It is preferable that the irradiation position detection unit 89 acquires the coordinates of the center of gravity of the spot SP as the irradiation position of the spot SP.
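- the binarize-and-difference pipeline above can be sketched as follows, using the stated threshold conditions (red ≥ 225, green in 30–220, blue ≥ 160) and clipping negative differences to 0; the toy arrays and the final centroid step are illustrative, and `detect_spot` is a hypothetical helper, not the embodiment's implementation.

```python
import numpy as np

def detect_spot(red, green, blue):
    """Remove noise by two difference steps, then return the spot centroid."""
    pr = (red >= 225).astype(np.int16)                        # PRy
    pg = ((green >= 30) & (green <= 220)).astype(np.int16)    # PGy
    pb = (blue >= 160).astype(np.int16)                       # PBy
    pd1 = np.clip(pr - pg, 0, 1)   # first difference: removes noise N1
    pd2 = np.clip(pd1 - pb, 0, 1)  # second difference: removes noise N2
    ys, xs = np.nonzero(pd2)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())  # center of gravity of SP

red = np.zeros((6, 6), np.uint8)
green = np.zeros((6, 6), np.uint8)
blue = np.zeros((6, 6), np.uint8)
red[2, 2], green[2, 2], blue[2, 2] = 255, 10, 10     # true spot SP
red[4, 4], green[4, 4], blue[4, 4] = 230, 100, 100   # noise N1 (green passes)
red[0, 0], green[0, 0], blue[0, 0] = 230, 240, 200   # noise N2 (blue passes)
print(detect_spot(red, green, blue))                 # -> (2.0, 2.0)
```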
- the second signal processing unit 85 sets a first virtual scale indicating the actual size of the subject as a virtual scale based on the position of the spot SP.
- the second signal processing unit 85 calculates the size of the virtual scale from the position of the spot with reference to the scale table 62, which stores the relationship between the position of the spot SP and the first virtual scale indicating the actual size of the subject. Then, the second signal processing unit 85 sets the first virtual scale corresponding to the calculated size.
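The table lookup can be pictured as interpolating between stored spot positions. In the sketch below the table entries (spot y-coordinate to pixels-per-millimeter) are invented for illustration and do not come from the patent:

```python
import numpy as np

# Hypothetical scale-table entries: spot y-coordinate -> pixels per mm.
# A spot lower in the image here stands for a longer observation
# distance, so fewer pixels represent 1 mm (values are illustrative).
SCALE_TABLE = {100: 12.0, 300: 6.0, 500: 3.0}

def first_virtual_scale_pixels(spot_y, actual_size_mm=5.0):
    ys = np.array(sorted(SCALE_TABLE))
    ppm = np.array([SCALE_TABLE[y] for y in ys])
    # Linearly interpolate the table for intermediate spot positions
    return actual_size_mm * float(np.interp(spot_y, ys, ppm))
```

A virtual scale representing 5 mm would then be drawn with the returned pixel length at the spot position.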
- the signal processing unit 45 of the expansion processor device 17 includes an irradiation area recognition unit 90 and a second signal processing unit 60.
- the irradiation area recognition unit 90 recognizes a measurement light irradiation area having a pattern of a specific shape from the captured image.
- the pattern of the specific shape includes the white central region CR1 and the peripheral region SR1, which surrounds the central region and has a feature amount based on the measurement light. If the measurement light irradiation region is the spot SP described above, the pattern of the specific shape is circular: the white central region CR1 is circular and the peripheral region SR1 is ring-shaped.
- FIG. 60 shows the distribution of the pixel values of each color image in the captured image, which includes the red image RP, the green image GP, and the blue image BP as a plurality of color images. Since the pixel values of the red image RP, the green image GP, and the blue image BP in the central region CR1 reach the maximum pixel value (for example, 255), the central region CR1 is white. In this case, when the measurement light is incident on the image pickup element 32, as shown in FIG. 61, in the wavelength range WMB of the measurement light, the measurement light is transmitted not only through the red color filter RF of the image pickup element 32 but also through the green color filter GF and the blue color filter BF at their maximum transmittance.
- in the peripheral region SR1, the pixel value of the red image RP is larger than the pixel value of the green image GP or the blue image BP. Therefore, the peripheral region SR1 is reddish.
- the pixel values of the red image RP, the green image GP, and the blue image BP in the central region CR1 are brought to the maximum pixel value by emitting the measurement light with a specific light amount.
- the irradiation area recognition unit 90 can recognize the spot SP having the above-mentioned specific shape and feature amount. Specifically, as shown in FIG. 62, it is preferable that the irradiation area recognition unit 90 has a learning model 91 that recognizes the spot SP by outputting the spot SP, which is the measurement light irradiation area, in response to the input of the captured image.
- the learning model 91 is machine-learned from a large number of teacher data in which captured images are associated with already recognized measurement light irradiation regions. As the machine learning method, it is preferable to use a CNN (Convolutional Neural Network).
- by recognizing the spot SP using the learning model 91, it is possible to recognize not only the circular spot SP composed of the circular central region CR1 and the ring-shaped peripheral region SR1 (see FIG. 59) but also a spot SP whose pattern is deformed from the specific circular shape. For example, as shown in FIG. 63(A), a spot SP deformed in the vertical direction can be recognized. Further, as shown in FIG. 63(B), a spot SP in which a part of the circular shape is missing can also be recognized.
- the feature quantities of the peripheral region SR1 that can be recognized by the learning model 91 include blue and green in addition to red, which is the color of the measurement light.
- the feature quantities of the peripheral region SR1 that can be recognized by the learning model 91 also include the luminance, brightness, saturation, and hue of the measurement light.
- it is preferable that the luminance, brightness, saturation, and hue of the measurement light are acquired by performing luminance conversion processing, or brightness, saturation, and hue conversion processing, on the peripheral area of the spot SP included in the captured image.
- the signal processing unit 45 of the extended processor device 17 includes, in order to recognize the position of the spot SP in the captured image, calculate the observation distance to the subject, and set the virtual scale,
- a position specifying unit 92 that specifies the position of the spot SP and calculates the observation distance, and
- an image processing unit 93 that sets various virtual scales based on the observation distance and generates a length measurement image obtained by processing the captured image using the various virtual scales.
- the position specifying unit 92 has a distance calculation unit 94.
- the position specifying unit 92 identifies the position of the spot SP formed on the subject by the measured light based on the captured image in which the subject is illuminated by the illumination light and the measured light.
- the distance calculation unit 94 obtains the observation distance from the position of the spot SP.
- the image processing unit 93 includes an image selection unit 95, a scale table 62, an offset setting unit 97, an offset distance calculation unit 98, and an offset virtual scale generation unit 99.
- the image selection unit 95 selects an image to be processed based on the position of the spot SP.
- the offset setting unit 97 sets an offset amount with respect to the observation distance according to the height of the convex portion at the spot SP.
- the offset distance calculation unit 98 calculates the offset distance by adding the offset amount to the observation distance.
- the offset virtual scale generation unit 99 generates an offset virtual scale based on the offset distance.
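Numerically, the offset step reduces to adding the convex height to the observation distance and re-reading the scale table at the offset distance. In the sketch below, `pixels_per_mm_at` is a stand-in for the scale table 62 lookup (an assumption for illustration, not the patent's interface):

```python
def offset_distance(observation_distance_mm, spot_height_mm):
    # Offset amount = height HT of the convex portion at the spot SP;
    # offset distance D6 = observation distance D5 + offset amount.
    return observation_distance_mm + spot_height_mm

def offset_virtual_scale(observation_distance_mm, spot_height_mm,
                         pixels_per_mm_at):
    # pixels_per_mm_at maps an observation distance to pixels per mm
    # on the captured image (stand-in for the scale table 62).
    d6 = offset_distance(observation_distance_mm, spot_height_mm)
    return pixels_per_mm_at(d6)
```

For example, with an observation distance of 30 mm and a spot height of 5 mm, the scale is generated as if the subject were 35 mm away, so it matches the extension surface rather than the top of the polyp.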
- the convex shape of the subject means a shape of the subject protruding from its surroundings. Even a part of the shape may protrude from the surroundings, and other attributes such as the size, the width of the shape, the height and/or the number of protruding portions, and the continuity of the height are not limited.
- the height of the spot SP of the polyp 100 is the vertical distance from the spot SP of the polyp 100 to the flat portion 100b of the polyp 100. More specifically, as shown in FIG. 66, the spot SP1 is formed on the top 100a of the polyp 100. Therefore, if the surface that passes through the top 100a of the polyp 100 and is parallel to the extension surface 101 is defined as the parallel surface 102, the distance between the parallel surface 102 and the extension surface 101 is the height HT1 of the spot SP of the polyp 100, that is, the vertical distance from the top 100a of the polyp 100 to the flat portion 100b.
- the polyp 100 and the height HT1 are shown schematically; as long as a portion protrudes from its surroundings, the type, shape, size, and so on of the convex shape do not matter.
- the spot SP2 is formed in a region other than the top 100a of the polyp 100, that is, between the top 100a of the polyp 100 and the end of the polyp 100. Therefore, if the surface that passes through the spot SP2 of the polyp 100 and is parallel to the extension surface 101 is defined as the parallel surface 103, the distance between the parallel surface 103 and the extension surface 101 is the height HT2 of the spot SP2 of the polyp 100, that is, the vertical distance from the spot SP2 of the polyp 100 to the flat portion 100b.
- the spot SP1 is formed on the top 100a of the polyp 100 by the measurement light Lm.
- the observation distance obtained from the spot SP1 is the distance D5 between the position P1 of the tip portion 12d of the endoscope and the position P2 of the spot SP1.
- the virtual scale corresponding to the distance D5 is a virtual scale that matches the actual size on the parallel surface 102. Therefore, when the spot SP1 is formed on the top 100a of the polyp 100 and the virtual scale corresponding to the distance D5 is generated and displayed, a virtual scale matching the actual size of the subject on the parallel surface 102 is displayed, so the displayed virtual scale is shifted, for example toward a larger scale, with respect to the actual size of the subject on the extension surface 101.
- the offset setting unit 97 sets the height HT1 of the spot SP of the polyp 100 as the offset amount with respect to the observation distance D5.
- the offset virtual scale generation unit 99 generates a virtual scale based on the observation distance D6 as the virtual scale for offset. More specifically, the offset virtual scale generation unit 99 refers to the scale table 62, obtains the virtual scale for the case where the observation distance is the distance D6, and uses this as the virtual scale for offset.
- the virtual scale for offset indicates the actual distance or size of the subject on the extension surface 101.
- the image processing unit 93 performs processing for superimposing the generated virtual scale for offset on the captured image to generate a length measurement image.
- the virtual scale for offset is preferably superimposed so as to be displayed at the position where the spot SP is formed, for more accurate measurement. Even when it must be displayed at a position away from the spot SP, it is preferably displayed as close to the spot SP as possible.
- the length measurement image on which the virtual scale for offset is superimposed is displayed on the extended display 18 by the display control unit 46.
- the width W11 of the line constituting the virtual scale M11 is set to 1 pixel, which is the smallest setting value.
- the width W12 of the line constituting the virtual scale M12 is set to 2 pixels.
- the width W13 of the line constituting the virtual scale M13 is set to 3 pixels which is an intermediate value of the setting.
- the width W14 of the line constituting the virtual scale M14 is set to 4 pixels.
- the width W15 of the line constituting the virtual scale M15 is set to 5 pixels, which is the largest setting value.
- since the widths of the lines constituting the virtual scales M11 to M15 are changed according to the observation distance, it is easy for the user doctor to measure the accurate dimensions of the subject. Further, since the widths W11 to W15 of the lines constituting the virtual scales M11 to M15 are set to values inversely related to the observation distance, the magnitude of the dimensional error can be recognized from the width of the lines. For example, taking this error into account, if the tumor tm lies inside the lines constituting the virtual scales M11 to M15, it is definitely smaller than the set actual size (within 5 mm in the example shown in FIG. 68).
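One way to realize "line width inversely related to observation distance" is a linear map clamped to the observation distance range; in this sketch the range limits `d_near` and `d_far` are placeholders, not values from the patent:

```python
def scale_line_width(distance_mm, d_near=20.0, d_far=100.0,
                     w_min=1, w_max=5):
    """Map observation distance to a line width in pixels: widest
    (5 px) at the near end Pz, narrowest (1 px) at the far end Px."""
    d = min(max(distance_mm, d_near), d_far)   # clamp to the range
    t = (d_far - d) / (d_far - d_near)         # 1 at near end, 0 at far end
    return round(w_min + t * (w_max - w_min))
```

The same mapping could drive the broken-line gap widths G1 to G3 described later, since they vary with observation distance in the same direction.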
- a concentric virtual scale M2 composed of three concentric circles having different sizes may be set based on the position of one spot SP.
- the three concentric circles M21, M22, and M23 constituting the virtual scale M2 represent, for example, the actual size of "5 mm", “10 mm", and "20 mm”.
- the lines constituting the circular shapes are cross-hatched for convenience of illustration, but in reality each line is filled with a single color.
- the width W22 of the concentric circle M22, located one step outside the innermost concentric circle M21, is set larger than the width W21 of the concentric circle M21, and the width W23 of the outermost concentric circle M23 is set larger than the width W22 of the concentric circle M22.
- for example, the width W23 is set to a value √2 times the width W22 and twice the width W21.
- a gradation may be added to the lines constituting the virtual scale M3 so that the density gradually decreases from the center of the line in the width direction toward the outside. Then, as in the first and second embodiments, the width of the line with the gradation is changed according to the observation distance.
- the lines constituting the virtual scales M41 to M43 are set as broken lines, and the gaps constituting the broken lines are set to values inversely related to the observation distance.
- FIG. 71 shows circular virtual scales M41, M42, and M43 when images are taken at each point of the far end Px, the near center Py, and the near end Pz in the observation distance range R1.
- the gap G1 of the broken line constituting the virtual scale M41 is set to the smallest set value.
- the virtual scale M41 is configured from a broken line, but the gap G1 may be set to 0; that is, only in the case of the far end Px, the virtual scale M41 may be configured from a solid line.
- the gap G2 of the broken line constituting the virtual scale M42 is set as the intermediate value of the setting.
- the gap G3 of the broken line constituting the virtual scale M43 is set to the largest set value.
- since the gaps G1 to G3 of the broken lines constituting the virtual scales M41 to M43 are set to values inversely related to the observation distance, the magnitude of the dimensional error can be recognized from the gaps of the broken lines.
- in the above, the number of lines that make up the virtual scale is the same regardless of the observation distance, but as shown in FIG. 72, the number of lines constituting the virtual scales M51 to M53 may be changed according to the observation distance. Note that FIG. 72 shows the virtual scales M51, M52, and M53 when images are taken at the far end Px, near the center Py, and the near end Pz of the observation distance range Rx, respectively.
- as shown in FIG. 72(A), in the case of the far end Px in the range Rx, the virtual scale M51 is composed of three lines, that is, three concentric circles of different sizes. The three concentric circles represent, for example, the actual sizes of "5 mm", "10 mm", and "20 mm".
- in the case of Py near the center of the range Rx, the virtual scale M52 is composed of two lines, that is, two concentric circles of different sizes. The two concentric circles represent, for example, the actual sizes of "5 mm" and "10 mm".
- in the case of the near end Pz in the range Rx, the virtual scale M53 is composed of one line, that is, one circle, which represents an actual size of, for example, "5 mm".
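Choosing how many concentric circles to draw can be sketched as splitting the observation distance range Rx into bands; the range limits and the equal-thirds split below are illustrative assumptions, not from the patent:

```python
def concentric_scale_sizes(distance_mm, d_near=20.0, d_far=100.0):
    """Pick the concentric circles to draw: three near the far end Px,
    two near the center Py, one near the near end Pz (cf. FIG. 72)."""
    third = (d_far - d_near) / 3.0
    if distance_mm >= d_far - third:
        return [5, 10, 20]      # actual sizes in mm
    if distance_mm >= d_near + third:
        return [5, 10]
    return [5]
```

The intuition: close to the subject only a small region is visible, so only the smallest circle fits on screen.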
- the signal processing unit 45 of the expansion processor device 17 includes a position specifying unit 92 including a distance calculation unit 94 and an image processing unit 104.
- the image processing unit 104 includes an image selection unit 95, a scale table 62, a virtual scale setting unit 105, a virtual scale switching reception unit 106, and a length measurement image creation unit 107.
- the virtual scale setting unit 105 sets a virtual scale that represents the actual size of the observation target on the subject according to the position of the spot SP and has a scale with the end as a base point.
- the virtual scale switching reception unit 106 receives an instruction to switch and set a plurality of virtual scales.
- the length measurement image creation unit 107 creates a length measurement image in which the virtual scale set by the virtual scale setting unit 105 is superimposed on the captured image so that the position of the spot SP and the base point of the scale of the virtual scale overlap.
- the captured image 109 in which the subject including the polyp 108 is illuminated is input to the signal processing unit 45.
- the captured image 109 includes the polyp 108, the spot SP, and optionally the shadow 110.
- the position specifying unit 92 specifies the position of the spot SP based on the captured image 109 input to the signal processing unit 45.
- the virtual scale setting unit 105 sets a virtual scale that represents the actual size of the observation target corresponding to the position of the spot SP and has a scale with the end as a base point.
- the end portion is a portion closer to the outside than to the center of the virtual scale, such as a start point or an end point in the shape of the virtual scale.
- the length-measuring image creation unit 107 creates a length measurement image in which the virtual scale 111 set by the virtual scale setting unit 105 is superimposed on the captured image 109 so that the position of the spot SP and the base point of the scale of the virtual scale 111 overlap.
- the virtual scale 111 is preferably superimposed so as to be displayed at the position of the spot SP for more accurate measurement. Even when it must be displayed at a position away from the spot SP, it is preferably displayed as close to the spot SP as possible.
- the virtual scale 111 is a straight line segment, and has scale marks at the start point and the end point of the line segment, each of which is a short line perpendicular to the line segment.
- the start point and/or the end point itself may be used as a scale mark, but this is not required.
- the virtual scale 111 may have a number "10" in the vicinity of the base end of the scale. This number "10" is a scale label 111a of the virtual scale 111, and is attached so that it can be easily recognized that the line segment of the virtual scale 111 is the actual size of 10 mm.
- the numbers possessed by the other virtual scales described below have the same meaning.
- the numerical value of the scale label 111a can be changed by setting, and the virtual scale 111 may be displayed without the scale label 111a itself.
- Various types of virtual scales are used depending on the settings. For example, a straight line segment or a combination of straight line segments, a combination of circles or circles in shape, a combination of straight line segments and circles, and the like are used.
- the captured image 113 includes a virtual scale 112 whose shape is a combination of straight line segments.
- the virtual scale 112 has a shape in which straight line segments are combined in an L-shape; the line segments extend upward and laterally on the paper surface from the corner of the L-shape as the base point, and the end point of each line segment serves as a scale mark measured from the base point.
- the virtual scale 112 has a numerical value of "10", which is the scale label 112a, in the vicinity of the base end of the scale, similarly to the virtual scale 111.
- the captured image 114 includes a virtual scale 115 whose shape is a combination of a straight line segment and a circle.
- the virtual scale 115 has a shape in which a circle and a line segment having the diameter of the circle are combined, and the intersection of the line segment and the circle is used as a scale.
- a scale mark 116 may be provided at the point that halves the line segment, that is, at the center of the circle.
- the virtual scale 115 has the number "10", which is the scale label 116a, in the vicinity of the base point of the scale, similarly to the virtual scale 111 and the virtual scale 112.
- the scale label 116b represents half the value of the scale label 116a.
- the virtual scale can take various shapes: for example, a virtual scale 117 including a scale label 117a, in which a line segment extends from the base point to the left of the paper surface (FIG. 78(A)); a virtual scale 118 including a scale label 118a, in which a line segment extends downward from the base point (FIG. 78(B)); or a virtual scale 119 including a scale label 119a, in which a line segment extends diagonally upward to the right from the base point (FIG. 78(C)).
- the signal processing unit 45 of the expansion processor device 17 includes a position specifying unit 92, a reference scale setting unit 120, a measured value scale generation unit 121, and a length measurement image generation unit 122.
- the reference scale setting unit 120 sets a reference scale indicating the actual size of the subject based on the position of the spot SP.
- the measured value scale generation unit 121 generates a measured value scale indicating the measured value measured at the measured portion of the region of interest based on the set reference scale. Since the reference scale and the measured value scale are virtual ones to be displayed on the captured image, they correspond to the virtual scales.
- the region of interest is an area, included in the subject, that the user should pay attention to.
- the area of interest is, for example, a polyp or the like, which is likely to require measurement.
- the measurement portion is a portion whose length or the like is measured in the region of interest. For example, when the region of interest is a reddish portion, the measurement portion is the longest part of the reddish portion, and when the region of interest is circular, the measurement portion is the diameter of the region of interest.
- the length measurement image generation unit 122 creates a length measurement image in which the measurement value scale is superimposed on the captured image.
- the measured value scale is superimposed on the captured image in a state of being matched to the measured portion of the region of interest.
- the length measurement image is displayed on the extended display 18.
- the reference scale setting unit 120 includes a reference scale table 121a.
- the reference scale table 121a is correspondence information in which the position of the spot SP and the measurement information corresponding to the actual size of the subject are associated with each other.
- the captured image 114 in which the subject including the polyp 123 to be observed is captured is input to the signal processing unit 45.
- the polyp 123 has, for example, a three-dimensional shape in which spheres are overlapped.
- a spot SP is formed at the end on the polyp 123.
- the position specifying unit 92 specifies the position of the spot SP.
- the reference scale setting unit 120 sets the reference scale 131 indicating the actual size of the subject corresponding to the position of the specified spot SP with reference to the reference scale table 121a.
- the reference scale 131 is, for example, a line segment having a number of pixels corresponding to 20 mm in the actual size, and a numerical value and a unit indicating the actual size.
- the reference scale 131 is not normally displayed on the extended display 18, but when the reference scale 131 is displayed on the extended display 18, it is displayed as in the captured image 124.
- the measured value scale generation unit 121 includes a region of interest extraction unit 125, a measurement unit determination unit 126, a measurement content reception unit 127, and a measurement value calculation unit 128.
- the attention region extraction unit 125 extracts the hatched region as the attention region 129 as in the captured image 124.
- the measurement portion determination unit 126 determines the measurement portion according to a preset criterion, for example, measuring the region of interest in the horizontal direction with the spot SP as the base point.
- as in the captured image 124, the horizontal edge position 130 is extracted with the spot SP as the base point.
- the measurement portion is between the spot SP and the horizontal edge position 130.
- the measured value calculation unit 128 sets the actual size of the reference scale to L0 and the number of pixels of the reference scale 131 on the captured image 124 to Aa, as when the reference scale 131 is superimposed on the region of interest 129 in the captured image 124.
- the measured value calculation unit 128 calculates the actual size of the measurement value scale 132 from the number of pixels Aa corresponding to the reference scale 131 shown in the captured image 124a and the number of pixels of the measurement portion between the spot SP and the horizontal edge position 130 shown in the captured image 124b.
- as in the captured image 124d, the actual size of the measured value scale 132 is calculated as, for example, 13 mm.
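The conversion from pixel counts to actual size follows directly from the reference scale: if L0 millimeters corresponds to Aa pixels, a measurement portion of Bb pixels corresponds to L0 × Bb / Aa millimeters (the variable name Bb for the measurement-portion pixel count is ours, introduced for illustration):

```python
def measured_size_mm(l0_mm, aa_ref_pixels, bb_measured_pixels):
    # Reference scale: l0_mm of actual size spans aa_ref_pixels on the
    # captured image; scale the measured pixel count proportionally.
    return l0_mm * bb_measured_pixels / aa_ref_pixels
```

For instance, if the 20 mm reference scale spans 200 pixels and the measurement portion spans 130 pixels, the measured value scale is 13 mm, consistent with the example above.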
- the length measurement image generation unit 122 generates a length measurement image 133 in which the measurement value scale 132 is superimposed on the captured image 124.
- the measured value scale 132 is superimposed on the captured image 124 as a figure, such as an arrow, in the shape of a straight line segment.
- the length measurement image 133 may include a numerical value of the actual size of the measurement value scale 132.
- the numerical value of the actual size of the measured value scale 132 may be superimposed on the captured image 124 in a state separated from the figure such as an arrow.
- the type of measured value scale 132 can be selected from multiple types.
- the measurement content reception unit 127 receives the setting of the content of the measurement value scale and sends it to the measurement value scale generation unit 121, and the length measurement image generation unit 122 generates the length measurement image 133 using the measurement value scale 132 generated based on that content.
- the attention region extraction unit 125 extracts the region of interest using a trained model learned from captured images acquired in the past.
- as the trained model, various models suitable for image recognition by machine learning can be used.
- for the purpose of recognizing a region of interest on an image, a model using a neural network can preferably be used.
- training is performed using a captured image having information on the region of interest as teacher data.
- Information on the region of interest includes the presence or absence of the region of interest, the position or range of the region of interest, and the like.
- learning may be performed using a captured image that does not have information on the region of interest.
- the measurement portion determination unit 126 also determines the measurement portion using the trained model learned from the captured images acquired in the past.
- the model used as the trained model is the same as that of the region of interest extraction unit, but when training these models, training is performed using captured images having information on the measurement portion. Information on the measurement portion includes a measured value and the measured portion. Depending on the model, learning may be performed using captured images that do not have information on the measurement portion.
- the trained model used by the region of interest extraction unit 125 and the trained model used by the measurement unit determination unit 126 may be common. For the purpose of extracting the measurement portion, one trained model may be used to extract the measurement portion without extracting the region of interest from the captured image 124.
- in the above, the scale table 62 used by the second signal processing unit 60 to display a virtual scale deformed according to the position of the spot SP is created from the representative point data table 66, which stores the irradiation position of the measurement light and the representative points of the virtual scale (see FIGS. 39 and 40).
- the scale table 62 may, however, be created by other methods. For example, as shown in FIG. 87, a distorted grid region QN enclosing a circular virtual scale centered on the spot SP is acquired from an image obtained by imaging a chart with a square grid. In the distorted grid region QN, the grid becomes more distorted by the distortion of the imaging optical system 21 as the distance from the center of the screen increases.
- the distorted grid region QN is transformed into a square grid region SQ as shown in FIG. 88 by an affine transformation matrix.
- the coordinates of the points representing the circular virtual scale are then calculated in the square grid region SQ.
- the coordinates of the points of the virtual scale in the square grid region SQ are converted by the inverse matrix of the affine transformation matrix into a distorted circular virtual scale reflecting the distortion of the imaging optical system 21.
- the coordinates of this distorted circular virtual scale and the position of the spot SP are associated and stored in the scale table 62.
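The grid-correction step can be sketched as a least-squares 2×3 affine fit: map distorted grid points to the square grid, lay out the circular scale there, then push its points back through the inverse transform. This NumPy sketch is a simplification; a real implementation would likely fit the transform locally (e.g. per grid cell), since lens distortion is not globally affine:

```python
import numpy as np

def affine_matrix(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src_pts -> dst_pts
    (e.g. distorted grid corners -> square grid corners)."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    a = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    m, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return m.T  # shape (2, 3)

def apply_affine(m, pts):
    pts = np.asarray(pts, float)
    return pts @ m[:, :2].T + m[:, 2]

def invert_affine(m):
    # Invert [A | b]: x = A^-1 (y - b)
    a, b = m[:, :2], m[:, 2]
    ai = np.linalg.inv(a)
    return np.hstack([ai, (-ai @ b)[:, None]])
```

The forward matrix straightens the distorted grid region QN into the square grid region SQ; the inverse maps the ideal circle back into the distorted shape stored in the scale table 62.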
- the display mode of the virtual scale may be changed between the region where measurement by the virtual scale is effective and other regions.
- when the spot SP is outside the region where measurement by the circular virtual scale M is effective,
- measurement of the tumor tm by the virtual scale is not effective, so cross-shaped virtual scales MN and MF are displayed, respectively.
- when the spot SP exists in the region where measurement by the circular virtual scale M is effective, the circular virtual scale M is displayed.
- the line type of the virtual scale may be changed depending on whether the spot SP is within or outside the measurement effective region.
- when the spot SP is outside the range of the measurement effective region, as shown in FIG. 90(a) and FIG. 90(c), the circular virtual scales MpN and MpF are displayed with dotted lines, respectively.
- when the spot SP exists in the region where measurement by the circular virtual scale is effective, the circular virtual scale Mp is displayed with a solid line.
- although the line type of the virtual scale is changed here between a dotted line outside the measurement effective region and a solid line inside it, different colors may be used instead.
- for example, when the spot SP is outside the range, the line of the virtual scale is blue, and when it is within the range, the line of the virtual scale is white.
- the system control unit 41 controls the light source device 13 so that the illumination light and the measurement light are emitted while no still image acquisition instruction is given.
- at the first timing, which includes the still image acquisition instruction, the illumination light remains on
- while the measurement light is turned off.
- at the second timing and the third timing after the first timing has elapsed, the measurement light is turned on again while the illumination light remains on.
- here the second timing and the third timing are the same timing, but they may be different timings.
- a second captured image is obtained by imaging the subject illuminated only by the illumination light.
- a first captured image is obtained by imaging the subject illuminated by the illumination light and the measurement light.
- the system control unit 41 saves the still image of the first captured image and the still image of the second captured image in the still image storage unit 42 as saved images.
- the signal processing unit 45 of the expansion processor device 17 acquires a still image of a third captured image, in which the virtual scale Mxm set according to the position of the spot SP is displayed on the first captured image.
- the still image of the third captured image is sent to the processor device 14 and stored in the still image storage unit 42.
- further, in order to notify that the still image has been recorded, the display control unit 46 displays the second captured image and the third captured image on the extended display 18 for a certain period after the still image acquisition. It is preferable that at least two of the first captured image, the second captured image, and the third captured image are stored in the still image storage unit 42 in response to one still image acquisition instruction; for example, the second captured image and the third captured image are stored. As described above, the third captured image corresponds to the saved image, among the length measurement images on which the virtual scale is superimposed, to be stored in the still image storage unit 42.
- The second timing or the third timing may be before the first timing. In that case, the first captured image is temporarily saved in a temporary storage unit (not shown) of the processor device 14.
- The first captured image saved in the temporary storage unit is saved in the still image storage unit 42 as the first captured image of the second timing, and a third captured image, in which a virtual scale is added to the first captured image saved in the temporary storage unit, is also stored in the still image storage unit 42.
- The second timing and the third timing may be different timings. In that case, the first captured image obtained at the second timing is stored in the still image storage unit 42 in the same manner as described above, while the first captured image obtained at the third timing has a virtual scale added to it, making it a third captured image, before being stored in the still image storage unit 42.
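The save flow above can be sketched as a small routine. The names (`capture_stills`, the timing labels, the string images) are illustrative stand-ins, not from the patent, which describes the behavior of the system control unit 41 only in prose.

```python
# Hypothetical sketch of the still-image save sequence: the frame at the
# first timing (illumination only) yields the second captured image; the
# frame at the second/third timing (illumination + measurement light)
# yields the first captured image and, with a virtual scale superimposed,
# the third captured image.

def capture_stills(frames):
    """frames: list of dicts with 'timing' ('first'|'second'|'third') and 'image'."""
    saved = {}
    for f in frames:
        if f["timing"] == "first":
            saved["second_image"] = f["image"]            # illumination only
        elif f["timing"] in ("second", "third"):
            saved["first_image"] = f["image"]             # illumination + measurement light
            saved["third_image"] = f["image"] + "+scale"  # virtual scale superimposed
    return saved
```

Whether the scale-bearing third image is built by the extended processor device or elsewhere does not change the shape of this bookkeeping.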
- The lesion recognition unit 135, the diagnostic information acquisition unit 136, and the learning unit 137 may be provided in the signal processing unit 45 of the extended processor device 17, in addition to the first signal processing unit 59 and the second signal processing unit 60.
- The lesion recognition unit 135 performs image processing on the first captured image (an image based on the illumination light and the measurement light) and performs recognition processing.
- As the recognition processing, a detection process for detecting a region of interest, such as a lesion area, from the first captured image is performed. It is preferable to use a machine-learned learning model for the recognition processing; that is, the detection result of the region of interest is output from the learning model in response to the input of the first captured image.
- The learning model is preferably a machine-learned model such as a convolutional neural network (CNN).
- The recognition processing performed by the lesion recognition unit 135 may instead be a discrimination process that discriminates the degree of progression of the lesion from the lesion portion recognized in the first captured image. The lesion recognition unit 135 may also perform the recognition processing on the second captured image (an image based only on the illumination light).
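As a rough illustration of how the detection step could be wired up, the sketch below passes a captured image through a model and keeps regions whose score exceeds a threshold. `dummy_model` merely stands in for a trained CNN; all names and the score threshold are hypothetical.

```python
# Illustrative stand-in for the machine-learned detection step described
# above: the model maps an image to scored candidate regions, and the
# recognition unit keeps those above a confidence threshold.

def detect_regions_of_interest(image, model, threshold=0.5):
    """Run the learning model on a captured image and keep candidate
    regions whose score meets the threshold."""
    candidates = model(image)  # model returns [(region, score), ...]
    return [region for region, score in candidates if score >= threshold]

def dummy_model(image):
    """Toy substitute for a trained CNN, returning fixed candidates."""
    return [(("lesion", 10, 20), 0.9), (("noise", 0, 0), 0.2)]
```

In a real system the model call would be an inference pass over pixel data rather than this fixed list.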
- The diagnostic information acquisition unit 136 acquires diagnostic information regarding the first captured image or the second captured image from the diagnostic information management device 138.
- As the diagnostic information, the medical record of the patient under examination is acquired.
- The medical record is information recording the progress of medical treatment or examination of a patient, and includes, for example, the patient's name, sex and age, disease name, major symptoms, prescription or treatment content, or medical history.
- The information on the lesion portion recognized by the lesion recognition unit 135 and the diagnostic information on the first or second captured image acquired by the diagnostic information acquisition unit 136 are saved in the still image storage unit 42 as attached data of the data set DS, in association with the first or second captured image.
- The learning unit 137 performs machine learning using the first or second captured image stored in the still image storage unit 42 and the attached data (data set) associated with it. Specifically, the learning unit 137 machine-learns the learning model of the lesion recognition unit 135.
- The second captured image is preferable as a candidate for teacher data for machine learning. Since the second captured image is obtained by a still image acquisition instruction at the time of measuring a tumor tm or the like, it is highly likely to include a region of interest to be observed. Moreover, since the second captured image is a normal endoscopic image not irradiated with the measurement light, it is highly useful as teacher data.
- Since the attached data includes the lesion information and the diagnostic information, the user does not have to input them when performing machine learning.
- The first captured image may be used as it is, but it is more preferable to use the portion other than the irradiation region of the measurement light as a teacher data candidate.
- The calibration method for creating the representative point data table 66 using the calibration device 200 shown in FIG. 97 is described below.
- The calibration device 200 includes a calibration display 201, a moving mechanism 202, a calibration display control unit 204, a calibration image acquisition unit 206, and a calibration unit 208.
- The calibration display control unit 204, the calibration image acquisition unit 206, and the calibration unit 208 are provided in the calibration image processing device 210.
- The calibration image processing device 210 is electrically connected to the processor device 14, the calibration display 201, and the moving mechanism 202.
- The moving mechanism 202 has a holding portion (not shown) that holds the tip portion 12d of the endoscope 12 facing the calibration display 201, and changes the distance Z between the tip portion 12d of the endoscope 12 and the calibration display 201 by moving the holding portion at specific intervals.
- Each time the moving mechanism 202 changes the distance Z, the calibration display control unit 204 causes the calibration display 201 to display an image of the virtual scale in a first display mode that is not affected by the imaging optical system 21, relative to the irradiation position of the measurement light. Since the image of the measurement marker in the first display mode does not take into account distortion or other effects of the imaging optical system 21, it is not displayed in a size or shape corresponding to the scale display position when displayed on the extended display 18.
- The calibration image acquisition unit 206 acquires a calibration image obtained by the endoscope 12 imaging the measurement marker of the first display mode displayed on the calibration display 201.
- A calibration image is captured by the endoscope 12 each time the distance Z is changed, that is, each time the virtual scale of the first display mode is displayed. For example, when the virtual scale of the first display mode is displayed n times, n calibration images are obtained.
- The calibration image includes an image of the measurement marker in a second display mode, which is affected by the imaging optical system 21 relative to the irradiation position of the measurement light.
- The image of the measurement marker in the second display mode is displayed in a size or shape corresponding to the scale display position, because the distortion and other effects of the imaging optical system 21 are taken into consideration.
- The calibration unit 208 calibrates the display of the virtual scale on the extended display 18 based on the calibration image acquired by the calibration image acquisition unit 206. Specifically, the calibration unit 208 performs a representative point extraction process that extracts representative points from the virtual scale image of the second display mode included in the calibration image, and a table creation process that creates a representative point data table by associating the representative point data with the irradiation position acquired at the same timing as the calibration image. The created representative point data table is sent to the extended processor device 17 and stored in the representative point data table 66.
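A minimal sketch of the table creation process is given below, assuming for illustration that the representative point of each scale image is the centroid of its extracted points (the patent does not specify the extraction rule); all names are hypothetical.

```python
# Hypothetical sketch: associate each irradiation position with a
# representative point extracted from the second-display-mode scale image
# captured at the same timing, building a lookup table.

def build_representative_point_table(calib_records):
    """calib_records: list of (irradiation_position, scale_image_points)."""
    table = {}
    for position, points in calib_records:
        # Representative point extraction, here taken to be the centroid.
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        table[position] = (cx, cy)
    return table
```

The resulting mapping plays the role of the representative point data table 66: given an irradiation position, it returns the calibrated point data.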
- The inspection system 300 is used for scale accuracy inspections, such as checking whether the virtual scale has a predetermined shape.
- The inspection system 300 includes a test chart 302, the display 15, and a moving mechanism unit 304.
- The display 15 is shared with the endoscope system 10, but a separate display for accuracy inspection may be provided.
- The test chart 302 has a chart main body 305; the chart main body 305 is provided with an inspection area portion 306 having inspection areas of specific shapes, and an inspection reference position 308 serving as a reference for aligning the irradiation position of the measurement light during the accuracy inspection.
- The inspection area portion 306 includes three circular inspection areas 306a, 306b, and 306c as the inspection areas of specific shapes. These three inspection areas 306a, 306b, and 306c are provided concentrically around the inspection reference position 308.
- The inspection areas 306a, 306b, and 306c are for a 5 mm virtual scale (indicating a diameter of "5 mm"), a 10 mm virtual scale (indicating a diameter of "10 mm"), and a 20 mm virtual scale (indicating a diameter of "20 mm"), respectively.
- The inspection image is acquired by illuminating the chart main body 305 with the measurement light used for the accuracy inspection (for example, the spot SP) and taking an image.
- The inspection image is displayed on the display 15.
- A virtual scale M corresponding to the irradiation position of the measurement light (the position of the spot SP) is displayed.
- The moving mechanism unit 304 moves the test chart 302 so that the irradiation position of the measurement light (the position of the spot SP) matches the inspection reference position.
- The user determines whether or not the virtual scale M is displayed properly. For example, when the entire 5 mm virtual scale M falls within the inspection area 306a, the user determines that it is displayed properly.
- When even part of the virtual scale M falls outside the inspection area 306a, such as when the virtual scale M partially protrudes from the inspection area 306a, the user determines that the 5 mm virtual scale M is not displayed properly.
- The scale table 62 may be created as follows.
- The relationship between the spot position and the size of the virtual scale can be obtained by imaging a chart on which an actual-size pattern is regularly formed. For example, while changing the observation distance to vary the spot position, spot-shaped measurement light is emitted toward a grid-like chart whose ruled lines are at actual size (5 mm) or finer (for example, 1 mm), and the chart is imaged. From the captured images, the relationship between the spot position (pixel coordinates on the imaging surface of the image sensor 32) and the number of pixels corresponding to the actual size (how many pixels represent the actual size of 5 mm) is acquired.
- (x1, y1) is the pixel position of the spot SP4 in the X and Y directions on the imaging surface of the image sensor 32 (with the upper left as the origin of the coordinate system).
- Let Lx1 be the number of pixels in the X direction, and Ly1 the number of pixels in the Y direction, corresponding to the actual size of 5 mm at the position (x1, y1) of the spot SP4.
- FIG. 103 shows a state in which the same chart with 5 mm ruled lines as in FIG. 102 is imaged, but at a shooting distance closer to the far end than in FIG. 102, so that the intervals between the ruled lines appear narrower.
- In the state of FIG. 103, let Lx2 be the number of pixels in the X direction, and Ly2 the number of pixels in the Y direction, corresponding to the actual size of 5 mm at the position (x2, y2) of the spot SP5 on the imaging surface of the image sensor 32. The measurements shown in FIGS. 102 and 103 are then repeated while changing the observation distance, and the results are plotted. In FIGS. 102 and 103, the distortion of the imaging optical system 21 is not taken into consideration.
- FIG. 104 shows the relationship between the X coordinate of the spot position and Lx (the number of pixels of the first measurement marker in the X direction), and FIG. 105 shows the relationship between the Y coordinate of the spot position and Lx.
- The functions g1 and g2 can be obtained from these plot results, for example by the method of least squares.
- The X coordinate and the Y coordinate of the spot have a one-to-one correspondence, and basically the same result (the same number of pixels for the same spot position) is obtained whichever of the functions g1 and g2 is used. Therefore, when calculating the size of the first virtual scale, either function may be used, and the function of g1 and g2 whose pixel count changes more sensitively with a position change may be selected. When the values of g1 and g2 differ significantly, it may be determined that "the position of the spot could not be recognized".
- FIG. 106 shows the relationship between the X coordinate of the spot position and Ly (the number of pixels in the Y direction), and FIG. 107 shows the relationship between the Y coordinate of the spot position and Ly.
- For Ly, either of the functions h1 and h2 may be used, as with Lx.
- The functions g1, g2, h1, and h2 obtained as described above are stored in the scale table 62 in lookup table format.
- Alternatively, the functions g1 and g2 may be stored in the scale table 62 in function format.
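A sketch of how functions such as g1 and g2 might be fitted and selected follows, assuming for simplicity a linear relation between spot coordinate and pixel count (the patent only says the functions are obtained from the plots, for example by least squares); the function names are illustrative.

```python
# Least-squares fit of pixel count versus spot coordinate, and selection of
# the fitted function whose pixel count is more sensitive to position
# changes (larger absolute slope), as described in the text.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def pick_function(g1, g2):
    """Choose whichever fitted function (a, b) has the higher sensitivity
    of pixel count to a position change, i.e. the larger |slope|."""
    return g1 if abs(g1[0]) >= abs(g2[0]) else g2
```

A real scale table would sample such fitted functions into the lookup-table form mentioned above.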
- Striped pattern light ZPL, which forms a striped pattern on the subject when the subject is irradiated, may be used as the measurement light (see, for example, JP-A-2016-198304).
- The striped pattern light ZPL is obtained by irradiating a liquid crystal shutter (not shown) of variable transmittance with a specific laser light, and is formed of two different patterns of vertical stripes, regions that transmit the specific laser light (transmission regions) and regions that do not (non-transmission regions), repeating periodically in the horizontal direction.
- The period of the striped pattern light changes depending on the distance to the subject. Therefore, the period or phase of the striped pattern light is shifted by the liquid crystal shutter and the subject is irradiated multiple times.
- The three-dimensional shape of the subject is measured based on the plurality of images obtained by shifting the period or phase.
- Specifically, the subject is alternately irradiated with striped pattern light of phase X, striped pattern light of phase Y, and striped pattern light of phase Z.
- The striped pattern lights of phases X, Y, and Z are each phase-shifted by 120° (2π/3) with respect to the vertical stripe pattern.
- The three-dimensional shape of the subject is measured using the three types of images obtained from the respective striped pattern lights.
- For example, it is preferable to illuminate the subject while switching among the phase X, phase Y, and phase Z striped pattern lights in units of one frame (or several frames). It is preferable that the illumination light always irradiates the subject.
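The three-image measurement can be illustrated with the standard three-step phase-shifting formula. This is a textbook reconstruction under the assumption of sinusoidal fringes with shifts of 0, 2π/3, and 4π/3; the patent itself leaves the shape computation unspecified.

```python
import math

def phase_from_three_shifts(i1, i2, i3):
    """Recover the fringe phase at a pixel from three intensities captured
    under striped pattern light shifted by 0, 2*pi/3 and 4*pi/3
    (corresponding to phases X, Y and Z). For i_k = A + B*cos(phi + d_k),
    sqrt(3)*(i3 - i2) = 3*B*sin(phi) and 2*i1 - i2 - i3 = 3*B*cos(phi),
    so atan2 of the pair gives phi."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)
```

The recovered per-pixel phase, unwrapped and combined with the system geometry, would yield the three-dimensional shape.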
- Measurement light LPL, which forms a grid pattern on the subject when the subject is irradiated, may be used as the measurement light (see, for example, JP-A-2017-217215).
- The grid pattern of the measurement light LPL is not a perfect grid but is slightly deformed from one, for example made wavy, so as to improve the detection accuracy of the grid pattern.
- The grid pattern is provided with S codes indicating that the end points of the left and right horizontal lines are continuous.
- The grid pattern may be a pattern in which vertical and horizontal lines are regularly arranged, or a pattern in which a plurality of spots are arranged in a vertical and horizontal grid.
- When the grid pattern measurement light LPL is used as the measurement light, the subject may be constantly irradiated with both the illumination light and the grid pattern measurement light LPL during the length measurement mode. Alternatively, as shown in FIG. 111, while the illumination light constantly illuminates the subject, the grid pattern measurement light LPL may intermittently irradiate the subject by repeatedly turning on and off (or dimming) every frame (or every few frames). In this case, in the frames in which the grid pattern measurement light LPL is on, the three-dimensional shape is measured based on the measurement light LPL, and it is preferable to superimpose the measurement result of the three-dimensional shape on the image obtained in the frames irradiated only with the illumination light.
- Three-dimensional plane light TPL, represented by mesh lines on the subject image, may be used as the measurement light (see, for example, JP-T-2017-508529).
- The tip portion 12d is moved so that the three-dimensional plane light TPL coincides with the measurement target.
- The distance along the intersection curve CC between the three-dimensional plane light TPL and the subject is then calculated, either by processing based on a manual operation through the user interface or by automatic processing.
- When the three-dimensional plane light TPL is used as the measurement light, the subject may be constantly irradiated with the illumination light and the three-dimensional plane light TPL during the length measurement mode. Alternatively, as shown in FIG. 113, while the illumination light constantly illuminates the subject, the three-dimensional plane light TPL may intermittently illuminate the subject by repeatedly turning on and off (or dimming) every frame (or every few frames).
- The hardware structure of the processing units that execute various processes, such as the receiving unit 38, the signal processing unit 39, the display control unit 40, the system control unit 41, the still image storage unit 42, the data transmission/reception units 43 and 44, the signal processing unit 45, and the display control unit 46, including the various control or processing units provided within them (for example, the length measurement mode control unit 50 and the first signal processing unit 59), is the following various processors.
- The various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (programs) to function as various processing units; a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing various processes.
- One processing unit may be configured by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by a single processor. As a first example of configuring a plurality of processing units with one processor, as typified by computers such as clients and servers, one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units.
- As a second example, as typified by a system on chip (SoC), a processor that realizes the functions of an entire system including a plurality of processing units with a single IC chip may be used.
- In this way, the various processing units are configured by using one or more of the above various processors as their hardware structure.
- More specifically, the hardware structure of these various processors is an electric circuit (circuitry) combining circuit elements such as semiconductor elements.
- The hardware structure of the storage unit is a storage device such as an HDD (hard disk drive) or SSD (solid state drive).
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Pathology (AREA)
- Optics & Photonics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Signal Processing (AREA)
- Astronomy & Astrophysics (AREA)
- General Physics & Mathematics (AREA)
- Endoscopes (AREA)
- Instruments For Viewing The Inside Of Hollow Bodies (AREA)
Abstract
Description
(Equation) First straight-line distance = ((xp2 − xp1)² + (yp2 − yp1)² + (zp2 − zp1)²)^0.5
The calculated first straight-line distance is displayed on the captured image as measurement information 71 ("20 mm" in FIG. 44). The specific spot SPk may or may not be displayed on the extended display 18.
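In code, the first straight-line distance above is simply the Euclidean distance between the two measurement points:

```python
import math

def first_straight_distance(p1, p2):
    """((xp2-xp1)^2 + (yp2-yp1)^2 + (zp2-zp1)^2)^0.5 for two 3-D points
    p1 = (xp1, yp1, zp1) and p2 = (xp2, yp2, zp2)."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))
```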
(Equation A) X = r × cosα × cosβ
(Equation B) Y = r × cosα × sinβ
(Equation C) Z = r × sinα
(Equation OS) D6 = D5 + HT1
(Equation K1) L1 = L0 × Ba/Aa
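The equations above translate directly into code. Reading (K1) as scaling a reference length L0 by the pixel-count ratio Ba/Aa is an interpretation based on the symbol list (Aa and Ba are numbers of pixels); the function names are illustrative.

```python
import math

def spherical_to_cartesian(r, alpha, beta):
    """Equations (A)-(C): X = r*cos(alpha)*cos(beta),
    Y = r*cos(alpha)*sin(beta), Z = r*sin(alpha)."""
    return (r * math.cos(alpha) * math.cos(beta),
            r * math.cos(alpha) * math.sin(beta),
            r * math.sin(alpha))

def offset_distance(d5, ht1):
    """Equation (OS): D6 = D5 + HT1."""
    return d5 + ht1

def measured_length(l0, aa, ba):
    """Equation (K1): L1 = L0 * Ba / Aa, scaling a reference length L0
    by the ratio of the pixel counts Ba and Aa."""
    return l0 * ba / aa
```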
12 Endoscope
12a Insertion part
12b Operation part
12c Bending part
12d Tip part
12f Observation mode changeover switch
12g Still image acquisition instruction switch
12h Zoom operation part
13 Light source device
14 Processor device
15 Display
16 User interface
17 Extended processor device
18 Extended display
18a Supplementary information display area
18b Observation image display area
19 Balloon
19a Tip part
19b Base end part
19c Bulging part
20a, 20b Rings
21 Imaging optical system
21a Objective lens
21b Zoom lens
21c Tip surface
22 Illumination optical system
22a Illumination lens
22b Tip surface
23 Measurement light emitting unit
23a Light source
23b DOE
23c Prism
24 Opening
25 Air/water supply nozzle
25a Injection cylinder part
25b Injection port
26 Intestinal tract
27 Tip cap
27a, 27b, 27c, 27d Through holes
28 Tip surface
28a, 28b Flat surfaces
30 Light source unit
31 Light source processor
32 Image sensor
33 Imaging control unit
34 CDS/AGC circuit
35 A/D converter
36 Communication I/F
37 Communication I/F
38 Receiving unit
39 Signal processing unit
40 Display control unit
41 System control unit
42 Still image storage unit
43 Data transmission/reception unit
44 Data transmission/reception unit
45 Signal processing unit
46 Display control unit
47 Storage part for measurement light emitting unit
48 Transparent cover
49 Prism
50 Length measurement mode control unit
53 Brightness information calculation unit
54 Illumination light quantity level setting unit
55 First light emission control table
56 Second light emission control table
57 Rising line
58 Diagonal line
59 First signal processing unit
60 Second signal processing unit
61 Irradiation position detection unit
62 Scale table
64 Table update unit
66 Representative point data table
67 Intersection line
68 Scale marks
69 Position specifying unit
70 Measurement information processing unit
71 Measurement information
72 Position specifying unit
73 Image processing unit
74 Noise component removal unit
75 Color information conversion unit
76 Binarization processing unit
77 Mask image generation unit
78 Removal unit
79 Color information of measurement light
80 Color information of noise component
81 Noise component region
82 Image selection unit
83 Scale table
84 First signal processing unit
85 Second signal processing unit
86 Mask processing unit
87 Binarization processing unit
88 Noise component removal unit
89 Irradiation position detection unit
90 Irradiation region recognition unit
91 Learning model
92 Position specifying unit
93 Image processing unit
94 Distance calculation unit
95 Image selection unit
97 Offset setting unit
98 Offset distance calculation unit
99 Virtual scale generation unit for offset
100 Polyp
100a Top part
100b Flat part
101 Extension surface
101X Solid line
102, 103 Parallel surfaces
102X Dotted line
104 Image processing unit
105 Virtual scale setting unit
106 Virtual scale switching reception unit
107 Length measurement image creation unit
108 Polyp
109 Captured image
110 Shadow
111, 112 Virtual scales
111a Scale label
113, 114 Captured images
115 Virtual scale
116 Scale marks
116a Scale label
116b Scale label
118 Virtual scale
118a Scale label
119 Virtual scale
119a Scale label
120 Reference scale setting unit
121 Measurement value scale generation unit
121a Reference scale table
122 Length measurement image generation unit
123 Polyp
124 Captured image
125 Region-of-interest extraction unit
126 Measurement portion determination unit
127 Measurement content reception unit
128 Measurement value calculation unit
129 Region of interest
130 Horizontal edge position
131 Reference scale
132 Measurement value scale
133 Length measurement image
135 Lesion recognition unit
136 Diagnostic information acquisition unit
137 Learning unit
138 Diagnostic information management device
140 Length-measurement-compatible endoscope determination unit
141 Measurement light ON/OFF switching unit
142 Length measurement image display setting ON/OFF switching unit
143 Length measurement function operating status display ON/OFF switching unit
144 Virtual scale display switching control unit
146 Scale-displaying icon
147 Virtual scale
147a, 147b, 147c Virtual scales
148 Scale-hidden icon
149 Pre-switching image display setting storage unit
200 Calibration device
201 Calibration display
202 Moving mechanism
204 Calibration display control unit
206 Calibration image acquisition unit
208 Calibration unit
210 Calibration image processing device
300 Inspection system
302 Test chart
304 Moving mechanism unit
305 Chart main body
306 Inspection area portion
306a, 306b, 306c Inspection areas
308 Inspection reference position
Aa, Ba Numbers of pixels
Ax Optical axis
BLC Balloon control device
BF Blue color filter
CL1 First feature line
CL2 Second feature line
CC Intersection curve
CR1 White central region
D1 First direction
D2 Second direction
D3 Third direction
D5 Distance
D6 Offset distance
DS1 Diffraction spot
DT Interval
EP Measurement point
G1, G2, G3 Gaps
GF Green color filter
HT1, HT2 Heights
LG Light guide
Ls, Lt Lines
Lm Measurement light
Lx1, Lx2 Numbers of pixels in X direction
Ly1, Ly2 Numbers of pixels in Y direction
LPL Grid pattern measurement light
M Circular virtual scale
M1, M2, M3 Cross-shaped virtual scales
M11, M12, M13, M14, M15 Virtual scales
M21, M22, M23 Virtual scales
M41, M42, M43 Virtual scales
M4A, M4B, M4C, M5A, M5B, M5C Circular markers
M51, M52, M53 Virtual scales
M6A, M6B, M6C Distorted concentric circular virtual scales
MN, MF Cross-shaped virtual scales
MpN, Mp, MpF Circular virtual scales
MT Movement trajectory
Mx Scale marks
Mxm Virtual scale
N1 First noise component
N2 Second noise component
P Polyp
P1, P2, P3 Positions
Px Near end
Py Near center
Pz Far end
RP, PRx Red images
PRy Binarized red image
GP, PGx Green images
PGy Binarized green image
BP, PBx Blue images
PBy Binarized blue image
PD1 First difference image
Qx, Qy, Qz Arrows
QN Distorted grid region
RP, RP* Representative points
RF Red color filter
SCC Specific intersection curve
SP Spot
SP1, SP2, SP3, SP4, SP5 Spots
SPk1 First spot
SPk2 Second spot
SQ Square grid region
SR1 Peripheral region
tm, tm1, tm2, tm3, tm4, tm5 Tumors
TPL Three-dimensional plane light
W11, W12, W13, W14, W15 Widths
W21, W22, W23 Widths
Wx Irradiation position movable range
WMB Wavelength range of measurement light
ZPL Striped pattern light
Claims (10)
1. An endoscope system comprising: an endoscope; and a processor device having an image control processor, wherein the image control processor determines, when an endoscope is connected to the processor device, whether or not the endoscope is a length-measurement-compatible endoscope, and enables switching to a length measurement mode in a case where the endoscope is the length-measurement-compatible endoscope.
2. The endoscope system according to claim 1, wherein, in a case where the endoscope is the length-measurement-compatible endoscope, the endoscope is capable of emitting measurement light, and a length measurement image displaying a virtual scale based on the measurement light can be displayed on a display; and the image control processor performs, in a state where switching to the length measurement mode is enabled, at least one of the following by an operation of switching to the length measurement mode: switching the measurement light ON or OFF; switching ON or OFF a length measurement image display setting relating to the length measurement image; switching ON or OFF a length measurement function operating status display indicating that the virtual scale is being displayed on the display; and switching the display of the virtual scale ON or OFF or changing its display mode.
3. The endoscope system according to claim 2, wherein the image control processor switches the measurement light ON, the length measurement image display setting ON, the length measurement function operating status display ON, and the display of the virtual scale ON by the operation of switching to the length measurement mode.
4. The endoscope system according to claim 3, wherein, in the operation of switching to the length measurement mode, when a mode switching condition is not satisfied, the image control processor prohibits switching the measurement light ON, the length measurement image display setting ON, the length measurement function operating status display ON, and the display of the virtual scale ON.
5. The endoscope system according to claim 4, wherein, instead of prohibiting switching the length measurement function operating status display ON, a length measurement function unavailable display indicating that the virtual scale is not being displayed is turned ON.
6. The endoscope system according to claim 3, wherein the image control processor saves the image display setting before switching to the length measurement mode when turning the length measurement image display setting ON.
7. The endoscope system according to claim 3, wherein the display mode of the virtual scale is changed by selection from among a plurality of scale patterns.
8. The endoscope system according to claim 2, wherein the image control processor switches the measurement light OFF, the length measurement image display setting OFF, the length measurement function operating status display OFF, and the display of the virtual scale OFF by an operation of switching from the length measurement mode to another mode.
9. The endoscope system according to claim 2 or 8, wherein the image control processor switches to the image display setting saved before switching to the length measurement mode when turning the length measurement image display setting OFF.
10. A method of operating an endoscope system comprising an endoscope and a processor device having an image control processor, wherein the image control processor determines, when an endoscope is connected to the processor device, whether or not the endoscope is a length-measurement-compatible endoscope, and enables switching to a length measurement mode in a case where the endoscope is the length-measurement-compatible endoscope.
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202180053946.2A CN116171398A (zh) | 2020-09-02 | 2021-03-08 | 内窥镜系统及其工作方法 |
| JP2022546878A JP7569384B2 (ja) | 2020-09-02 | 2021-03-08 | 内視鏡システム及びその作動方法 |
| EP21863868.2A EP4209821B1 (en) | 2020-09-02 | 2021-03-08 | Endoscope system and method for operating same |
| US18/177,537 US20230200682A1 (en) | 2020-09-02 | 2023-03-02 | Endoscope system and method of operating the same |
| JP2024175066A JP2024177468A (ja) | 2020-09-02 | 2024-10-04 | 内視鏡システム及びその作動方法 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020147691 | 2020-09-02 | ||
| JP2020-147691 | 2020-09-02 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/177,537 Continuation US20230200682A1 (en) | 2020-09-02 | 2023-03-02 | Endoscope system and method of operating the same |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022049807A1 true WO2022049807A1 (ja) | 2022-03-10 |
Family
ID=80490833
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/008993 Ceased WO2022049807A1 (ja) | 2020-09-02 | 2021-03-08 | 内視鏡システム及びその作動方法 |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20230200682A1 (ja) |
| EP (1) | EP4209821B1 (ja) |
| JP (2) | JP7569384B2 (ja) |
| CN (1) | CN116171398A (ja) |
| WO (1) | WO2022049807A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115311239A (zh) * | 2022-08-15 | 2022-11-08 | 合肥中纳医学仪器有限公司 | 面向视频图像测量的虚拟标尺构建方法、系统、测量方法 |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200289228A1 (en) * | 2019-03-15 | 2020-09-17 | Ethicon Llc | Dual mode controls for robotic surgery |
| CN116643394B (zh) * | 2023-05-31 | 2024-08-02 | 深圳英美达医疗技术有限公司 | 光通量调节方法、装置、设备、存储介质和程序产品 |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2006187427A (ja) * | 2005-01-05 | 2006-07-20 | Pentax Corp | 電子内視鏡システム |
| JP2011031026A (ja) * | 2009-07-31 | 2011-02-17 | Karl Storz Imaging Inc | ワイヤレスカメラ接続 |
| JP2016054842A (ja) * | 2014-09-08 | 2016-04-21 | オリンパス株式会社 | 内視鏡画像表示装置、内視鏡画像表示方法及び内視鏡画像表示プログラム |
| JP2016198304A (ja) | 2015-04-10 | 2016-12-01 | オリンパス株式会社 | 内視鏡システム |
| WO2017047321A1 (ja) * | 2015-09-18 | 2017-03-23 | オリンパス株式会社 | 信号処理装置 |
| JP2017508529A (ja) | 2014-03-02 | 2017-03-30 | ブイ.ティー.エム.(バーチャル テープ メジャー)テクノロジーズ リミテッド | 内視鏡測定システム及び方法 |
| JP2017217215A (ja) | 2016-06-07 | 2017-12-14 | 公立大学法人広島市立大学 | 3次元形状計測装置及び3次元形状計測方法 |
| WO2018051680A1 (ja) | 2016-09-15 | 2018-03-22 | 富士フイルム株式会社 | 内視鏡システム |
| JP2020014807A (ja) * | 2018-07-27 | 2020-01-30 | 富士フイルム株式会社 | 内視鏡装置及びその作動方法並び内視鏡装置用プログラム |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6243364B2 (ja) * | 2015-01-26 | 2017-12-06 | 富士フイルム株式会社 | 内視鏡用のプロセッサ装置、及び作動方法、並びに制御プログラム |
| EP3318174A4 (en) * | 2015-06-26 | 2019-04-10 | Olympus Corporation | ENDOSCOPE POWER SUPPLY SYSTEM |
| EP3513703B1 (en) * | 2016-09-15 | 2021-04-28 | FUJIFILM Corporation | Measurement support device, endoscope system, and endoscope processor for endoscope system |
| EP3656274A4 (en) * | 2017-07-18 | 2020-07-15 | FUJIFILM Corporation | ENDOSCOPY DEVICE AND MEASUREMENT SUPPORT METHOD |
| JP7115897B2 (ja) * | 2018-04-20 | 2022-08-09 | 富士フイルム株式会社 | 内視鏡装置 |
2021
- 2021-03-08 EP EP21863868.2A patent/EP4209821B1/en active Active
- 2021-03-08 JP JP2022546878A patent/JP7569384B2/ja active Active
- 2021-03-08 CN CN202180053946.2A patent/CN116171398A/zh active Pending
- 2021-03-08 WO PCT/JP2021/008993 patent/WO2022049807A1/ja not_active Ceased
2023
- 2023-03-02 US US18/177,537 patent/US20230200682A1/en active Pending
2024
- 2024-10-04 JP JP2024175066A patent/JP2024177468A/ja active Pending
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2006187427A (ja) * | 2005-01-05 | 2006-07-20 | Pentax Corp | 電子内視鏡システム |
| JP2011031026A (ja) * | 2009-07-31 | 2011-02-17 | Karl Storz Imaging Inc | ワイヤレスカメラ接続 |
| JP2017508529A (ja) | 2014-03-02 | 2017-03-30 | ブイ.ティー.エム.(バーチャル テープ メジャー)テクノロジーズ リミテッド | 内視鏡測定システム及び方法 |
| JP2016054842A (ja) * | 2014-09-08 | 2016-04-21 | オリンパス株式会社 | 内視鏡画像表示装置、内視鏡画像表示方法及び内視鏡画像表示プログラム |
| JP2016198304A (ja) | 2015-04-10 | 2016-12-01 | オリンパス株式会社 | 内視鏡システム |
| WO2017047321A1 (ja) * | 2015-09-18 | 2017-03-23 | オリンパス株式会社 | 信号処理装置 |
| JP2017217215A (ja) | 2016-06-07 | 2017-12-14 | 公立大学法人広島市立大学 | 3次元形状計測装置及び3次元形状計測方法 |
| WO2018051680A1 (ja) | 2016-09-15 | 2018-03-22 | 富士フイルム株式会社 | 内視鏡システム |
| JP2020014807A (ja) * | 2018-07-27 | 2020-01-30 | 富士フイルム株式会社 | 内視鏡装置及びその作動方法並び内視鏡装置用プログラム |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP4209821A4 |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115311239A (zh) * | 2022-08-15 | 2022-11-08 | 合肥中纳医学仪器有限公司 | 面向视频图像测量的虚拟标尺构建方法、系统、测量方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP7569384B2 (ja) | 2024-10-17 |
| JPWO2022049807A1 (ja) | 2022-03-10 |
| EP4209821A1 (en) | 2023-07-12 |
| JP2024177468A (ja) | 2024-12-19 |
| US20230200682A1 (en) | 2023-06-29 |
| EP4209821A4 (en) | 2024-02-21 |
| CN116171398A (zh) | 2023-05-26 |
| EP4209821B1 (en) | 2025-12-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP2024177468A (ja) | 内視鏡システム及びその作動方法 | |
| JP6785941B2 (ja) | 内視鏡システム及びその作動方法 | |
| US20250029255A1 (en) | Image processing method, apparatus and device | |
| JP7278202B2 (ja) | 画像学習装置、画像学習方法、ニューラルネットワーク、及び画像分類装置 | |
| JP6360260B2 (ja) | 受動マーカーに基づく光学追跡方法およびシステム | |
| JP2020124541A (ja) | 内視鏡システム | |
| JP7115897B2 (ja) | 内視鏡装置 | |
| WO2020008834A1 (ja) | 画像処理装置、方法及び内視鏡システム | |
| WO2021044590A1 (ja) | 内視鏡システム、処理システム、内視鏡システムの作動方法及び画像処理プログラム | |
| JPWO2022049807A5 (ja) | ||
| JP7116264B2 (ja) | 内視鏡システム及びその作動方法 | |
| CN114286961B (zh) | 内窥镜系统及其工作方法 | |
| CN118615018A (zh) | 一种手术显微镜的图像导航方法及系统 | |
| JP7029359B2 (ja) | 内視鏡装置及びその作動方法並び内視鏡装置用プログラム | |
| JP7391113B2 (ja) | 学習用医療画像データ作成装置、学習用医療画像データ作成方法及びプログラム | |
| US12029386B2 (en) | Endoscopy service support device, endoscopy service support system, and method of operating endoscopy service support device | |
| CN108498079A (zh) | 荧光污染环境中血管、淋巴管及淋巴结的识别方法及设备 | |
| JPWO2020153186A1 (ja) | 内視鏡装置 | |
| JPWO2021029277A5 (ja) | ||
| JP7447249B2 (ja) | テストチャート、検査システム、及び検査方法 | |
| JP7741004B2 (ja) | 内視鏡システム及びその作動方法 | |
| JP5460490B2 (ja) | 眼科装置 | |
| WO2022049806A1 (ja) | キャリブレーション装置及び方法 | |
| HIURA et al. | 3D endoscopic system based on active stereo method for shape measurement of biological tissues and specimen |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21863868 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2022546878 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2021863868 Country of ref document: EP Effective date: 20230403 |
|
| WWG | Wipo information: grant in national office |
Ref document number: 2021863868 Country of ref document: EP |