
WO2022163188A1 - Image processing device, image processing method, and surgical microscope system - Google Patents


Info

Publication number
WO2022163188A1
WO2022163188A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eyeball
display
surgical
surgical field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2021/046452
Other languages
English (en)
Japanese (ja)
Inventor
知之 大月
雄生 杉江
潤一郎 榎
浩司 鹿島
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Publication of WO2022163188A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/13 Ophthalmic microscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F9/00 Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
    • A61F9/007 Methods or devices for eye surgery

Definitions

  • the present disclosure relates to an image processing device, an image processing method, and a surgical microscope system.
  • as a method of refractive correction in ophthalmology, inserting an artificial lens called an intraocular lens (IOL) into the eye to eliminate the refractive error of the crystalline lens and improve visual functions such as visual acuity is widely practiced.
  • the most widely used intraocular lens is one that is inserted into the lens capsule as a replacement for the crystalline lens removed by cataract surgery.
  • there are also various other intraocular lenses, such as those fixed (indwelled) in the ciliary sulcus (phakic IOLs).
  • Patent Literature 1 proposes a technique of changing the position of a mark (pattern) indicating a preoperative plan according to the result of eyeball tracking.
  • the present disclosure proposes an image processing device, an image processing method, and a surgical microscope system capable of performing surgery in accordance with a preoperative plan with high precision.
  • An image processing apparatus according to the present disclosure includes: an image input unit that receives a surgical field image of a patient's eye; an eyeball tracking unit that tracks the eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image; and a display image generation unit that transforms the surgical field image based on the displacement of the eyeball and superimposes a mark on the transformed surgical field image to generate a display image. The display image generation unit generates the display image by positioning the mark at a predetermined position on the display screen regardless of the transformation of the surgical field image.
  • An image processing apparatus according to another aspect of the present disclosure includes: an image input unit that receives a surgical field image of a patient's eye; an eyeball tracking unit that tracks the eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image; and a display image generation unit that sets a plurality of luminance regions having different luminances, transforms the surgical field image based on the displacement of the eyeball, and superimposes the boundary between the plurality of luminance regions on the transformed surgical field image to generate a display image. The display image generation unit generates the display image by positioning the boundary at a predetermined position on the display screen regardless of the transformation of the surgical field image.
  • In an image processing method according to the present disclosure, an image processing device receives a surgical field image of a patient's eye, tracks the eyeball in the surgical field image, detects displacement of the eyeball in the surgical field image, transforms the surgical field image based on the displacement of the eyeball, and generates a display image by superimposing a mark on the transformed surgical field image. The image processing device generates the display image by positioning the mark at a predetermined position on the display screen regardless of the transformation of the surgical field image.
  • A surgical microscope system according to the present disclosure includes: a surgical microscope that obtains a surgical field image of a patient's eye; an image processing device that generates a display image; and a display device that displays the display image. The image processing device includes an image input unit that receives the surgical field image, an eyeball tracking unit that tracks the eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image, and a display image generation unit that transforms the surgical field image based on the displacement of the eyeball and generates the display image by superimposing a mark on the transformed surgical field image. The display image generation unit generates the display image by positioning the mark at a predetermined position on the display screen regardless of the transformation of the surgical field image.
  • FIG. 1 is a diagram showing an example of a schematic configuration of a surgical microscope system according to a first embodiment;
  • FIG. 2 is a diagram showing an example of a schematic configuration of a surgical microscope according to the first embodiment;
  • FIG. 3 is a diagram showing an example of a schematic configuration of an image processing apparatus according to the first embodiment;
  • FIG. 4 is a diagram showing example 1 of a display image according to the first embodiment;
  • FIG. 5 is a diagram showing example 2 of a display image according to the first embodiment;
  • FIG. 6 is a diagram showing example 3 of a display image according to the first embodiment;
  • FIG. 7 is a diagram showing example 4 of a display image according to the first embodiment;
  • FIG. 8 is a diagram showing example 5 of a display image according to the first embodiment;
  • FIG. 9 is a diagram showing example 6 of a display image according to the first embodiment;
  • FIG. 10 is a diagram showing example 7 of a display image according to the first embodiment;
  • FIG. 11 is a diagram showing example 8 of a display image according to the first embodiment;
  • FIG. 12 is a first diagram showing example 9 of a display image according to the first embodiment;
  • FIG. 13 is a second diagram showing example 9 of a display image according to the first embodiment;
  • FIG. 14 is a diagram showing example 10 of a display image according to the first embodiment;
  • FIG. 15 is a diagram showing an example of a schematic configuration of an image processing apparatus according to a second embodiment;
  • FIG. 16 is a diagram showing an example of a display image including a target for instructing the start of surgery according to the second embodiment;
  • FIG. 17 is a flow chart showing an example of surgery start instruction processing according to the second embodiment;
  • FIG. 18 is a diagram showing example 1 of the shape of a surgery start instruction target according to the second embodiment;
  • FIG. 19 is a diagram showing example 2 of the shape of a surgery start instruction target according to the second embodiment;
  • FIG. 20 is a diagram showing example 3 of the shape of a surgery start instruction target according to the second embodiment;
  • FIG. 21 is a diagram showing an example of a schematic configuration of a computer according to each embodiment of the present disclosure.
  • 1. First Embodiment
    1-1. Example of schematic configuration of surgical microscope system
    1-2. Example of schematic configuration of surgical microscope
    1-3. Schematic configuration of image processing apparatus and example of image processing
    1-4. Action and effect
    2. Second Embodiment
    2-1. Example of surgery start instruction processing
    2-2. Action and effect
    3.
  • FIG. 1 is a diagram showing an example of a schematic configuration of a surgical microscope system 1 according to the first embodiment.
  • the surgical microscope system 1 has a surgical microscope 10 and a patient bed 20.
  • This surgical microscope system 1 is a system used for eye surgery. The patient undergoes eye surgery while lying on the patient bed 20. An operator, who is a doctor, performs surgery while observing the patient's eye through the surgical microscope 10.
  • the surgical microscope 10 has an objective lens 11, an eyepiece lens 12, an image processing device 13, and a monitor 14.
  • the objective lens 11 and the eyepiece lens 12 are lenses for magnifying and observing the eye of the patient to be operated.
  • the image processing device 13 outputs various images, various information, and the like by performing predetermined image processing on the image captured through the objective lens 11.
  • the monitor 14 displays the image captured through the objective lens 11, various images generated by the image processing device 13, various information, and the like. This monitor 14 may be provided separately from the surgical microscope 10.
  • the operator looks into the eyepiece 12 and performs surgery while observing the patient's eye through the objective lens 11. Further, the operator performs surgery while confirming various images (for example, an image before image processing, an image after image processing, etc.) and various information displayed on the monitor 14.
  • FIG. 2 is a diagram showing an example of a schematic configuration of the surgical microscope 10 according to the first embodiment.
  • the surgical microscope 10 includes, in addition to the objective lens 11, the eyepiece lens 12, the image processing device 13, and the monitor 14, a light source 51, an observation optical system 52, a front image capturing unit 53, a tomographic image capturing unit 54, a presentation unit 55, an interface unit 56, and a speaker 57.
  • the monitor 14 and the presentation unit 55 correspond to display devices.
  • the light source 51 emits illumination light under the control of the control unit 13A included in the image processing device 13 to illuminate the eyes of the patient.
  • the observation optical system 52 is composed of optical elements such as the objective lens 11, a half mirror 52a, and lenses (not shown).
  • the observation optical system 52 guides the light (observation light) reflected from the patient's eye to the eyepiece 12 and the front image capturing section 53 .
  • the light reflected from the patient's eye enters the half mirror 52a as observation light via the objective lens 11, a lens (not shown), or the like.
  • Approximately half of the observation light incident on the half mirror 52a passes through the half mirror 52a as it is, and enters the eyepiece 12 via the transmission type presentation unit 55.
  • the other half of the observation light incident on the half mirror 52a is reflected by the half mirror 52a and enters the front image capturing section 53.
  • the front image capturing unit 53 is composed of, for example, a video camera.
  • the front image capturing unit 53 receives the observation light incident from the observation optical system 52 and photoelectrically converts it to capture a front image, that is, an image of the patient's eye observed from the front (an image photographed approximately along the eye axis direction).
  • the front image capturing unit 53 captures (captures) a front image under the control of the image processing device 13 and supplies the obtained front image to the image processing device 13 .
  • the tomographic image capturing unit 54 is configured by, for example, an optical coherence tomography (OCT) system, a Scheimpflug camera, or the like.
  • the tomographic image capturing unit 54 captures a tomographic image, which is a cross-sectional image of the patient's eye, under the control of the image processing device 13 and supplies the obtained tomographic image to the image processing device 13.
  • a tomographic image is an image of a cross section of the patient's eye in a direction substantially parallel to the eye axis direction.
  • the tomographic image capturing unit 54 acquires the tomographic image using, for example, infrared light based on the principle of interference; in that case, the optical path of the infrared light and part of the optical path of the observation light may be a common optical path.
  • the eyepiece 12 condenses the observation light incident from the observation optical system 52 via the presentation unit 55 and forms an optical image of the patient's eye. An optical image of the patient's eye is thereby observed by the operator looking through the eyepiece 12 .
  • the presentation unit 55 is composed of a transmissive display device or the like, and is arranged between the eyepiece 12 and the observation optical system 52 .
  • the presentation unit 55 transmits observation light incident from the observation optical system 52 and makes it enter the eyepiece 12, and also displays various images (for example, a front image, a tomographic image, etc.) and various information supplied from the image processing device 13. are also presented (displayed) as necessary.
  • various images, various information, and the like may be presented, for example, superimposed on the optical image of the patient's eye, or may be presented in the periphery of the optical image so as not to interfere with the optical image.
  • the image processing device 13 has a control section 13A that controls the operation of the surgical microscope 10 as a whole.
  • the control section 13A changes the illumination conditions of the light source 51 or changes the zoom magnification of the observation optical system 52 .
  • the control unit 13A controls image acquisition by the front image capturing unit 53 and the tomographic image capturing unit 54 based on the operation information of the operator or the like supplied from the interface unit 56 and the like.
  • the interface unit 56 is composed of, for example, a communication unit and the like.
  • the communication unit receives commands from an operation unit such as a touch panel superimposed on the monitor 14, a controller, a remote controller (not shown), or the like, and communicates with external devices.
  • the interface unit 56 supplies the image processing apparatus 13 with information and the like according to the operation of the operator.
  • the interface unit 56 also outputs device control information and the like for controlling the external device supplied from the image processing apparatus 13 to the external device.
  • the monitor 14 displays various images such as a front image and various information on the display screen in accordance with the control by the control unit 13A of the image processing device 13 .
  • the speaker 57 outputs a buzzer sound, a melody sound, a message (voice), or the like in order to notify the operator or the like of a dangerous situation during surgery.
  • the surgical microscope 10 may be provided with a rotating light or indicator light (lamp) for informing the operator or the like of a dangerous situation.
  • in the first embodiment, instead of moving the preoperative planning mark (marker) in accordance with the movement of the eye, the preoperative planning mark is fixed, and the real-time surgical field image is geometrically transformed and displayed so as to match the fixed mark. This allows the operator to easily perform detailed positioning and orienting of implants such as intraocular lenses, so that surgery can be performed accurately according to the preoperative plan.
  • FIG. 3 is a diagram showing an example of a schematic configuration (configuration and processing flow) of the image processing apparatus 13 according to the first embodiment.
  • the image processing apparatus 13 includes a preoperative plan receiving unit 13a, an image input unit 13b, a registration unit 13c, an information storage unit 13d, an eyeball tracking unit 13e, and a display image generation unit 13f.
  • the preoperative plan receiving unit 13a receives preoperative plan information for the patient's eye (for example, preoperative images of the preoperative plan, posture information of marks based on the preoperative plan, etc.).
  • the posture information of the mark includes information on the size of the mark, the position of the mark, and the orientation of the mark around the eye axis (size information, position information, orientation information, etc.).
  • the orientation around the eye axis is defined by the angle in the rotational direction around the eye axis with respect to a reference line orthogonal to the eye axis.
  • both the position of the mark in the coordinate system and its position in the rotational direction around the eye axis correspond to the positional information of the mark.
  • the image input unit 13b receives the surgical field image (front image) from the front image capturing unit 53 (see FIG. 2) and supplies the surgical field image (for example, the surgical field image at the start of surgery or the real-time surgical field image during surgery) to the registration unit 13c, the eyeball tracking unit 13e, the display image generation unit 13f, and the like.
  • the registration unit 13c compares the preoperative image of the preoperative plan with the surgical field image at the start of surgery to obtain the correspondence relationship between the two images, such as the amount and direction of deviation. The registration unit 13c then supplies the obtained deviation information (positional relationship information) regarding the amount and direction of deviation to the information storage unit 13d together with the surgical field image at the start of surgery.
  • the information storage unit 13d converts (changes) the posture information of the mark so as to match the surgical field image at the start of surgery, based on the deviation information and the surgical field image at the start of surgery supplied from the registration unit 13c, and stores the surgical field image at the start of surgery together with the mark posture information converted to match that image.
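The posture conversion performed here can be pictured as a 2D rigid transform applied to the mark coordinates. The sketch below is illustrative only (the function name and parameters are not from the patent): it rotates the mark points about the eye-axis center by the rotation found during registration, then applies the measured shift.

```python
import numpy as np

def convert_mark_posture(mark_points, shift_xy, rotation_deg, center_xy):
    """Map preoperative-plan mark coordinates onto the start-of-surgery
    image: rotate the points about the eye-axis center by the registration
    rotation, then translate by the measured shift."""
    theta = np.deg2rad(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(mark_points, dtype=float)
    center = np.asarray(center_xy, dtype=float)
    return (pts - center) @ rot.T + center + np.asarray(shift_xy, dtype=float)
```

For example, a mark point at (10, 0) rotated 90 degrees about the origin and shifted by (5, 5) lands at (5, 15).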
  • the eyeball tracking unit 13e tracks the eyeball in the real-time surgical field image by comparing the surgical field image at the start of surgery with the real-time surgical field image. The eyeball tracking unit 13e then supplies the display image generation unit 13f with displacement information indicating the difference (for example, the amount and direction of displacement) between the posture information of the eyeball in the real-time surgical field image and the posture information of the mark stored by the information storage unit 13d. Like the mark posture information, the eyeball posture information includes information on the size of the eyeball, the position of the eyeball, and the orientation around the eye axis (size information, position information, orientation information, etc.). Both the position of the eyeball in the coordinate system and its position in the rotational direction around the eye axis correspond to the positional information of the eyeball.
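The patent does not specify a tracking algorithm, but the translational component of such displacement could, for instance, be estimated by phase correlation between the start-of-surgery image and a real-time frame. A minimal FFT-based sketch (integer-pixel resolution, cyclic boundary assumption):

```python
import numpy as np

def detect_translation(reference, current):
    """Estimate the (dy, dx) shift of `current` relative to `reference`
    by phase correlation: the normalized cross-power spectrum has a
    sharp inverse-FFT peak at the displacement."""
    f0 = np.fft.fft2(reference)
    f1 = np.fft.fft2(current)
    cross = np.conj(f0) * f1
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                          # wrap-around -> negative shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

A practical tracker would also estimate rotation about the eye axis (for example via log-polar resampling) and work at sub-pixel resolution; this sketch shows only the translation step.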
  • the display image generation unit 13f changes the posture (position, orientation, etc.) of the real-time surgical field image based on the converted mark posture information so as to cancel the positional change of the eyeball relative to the mark in its fixed posture (fixed position, fixed orientation, etc.), and generates a display image by superimposing the fixed-posture mark on the real-time surgical field image whose posture has been changed.
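The compensation itself can be pictured as shifting the real-time frame opposite to the detected displacement and then stamping the mark at its fixed screen position. A minimal sketch, translation only (note `np.roll` wraps at the borders, whereas a real implementation would crop or pad, and would also compensate rotation):

```python
import numpy as np

def generate_display_image(frame, displacement, mark_mask, mark_value=255):
    """Shift the real-time frame opposite to the detected eyeball
    displacement (so the eye appears stationary on screen), then stamp
    the preoperative-plan mark at its fixed screen position."""
    dy, dx = displacement
    stabilized = np.roll(frame, (-dy, -dx), axis=(0, 1))  # cancel eye motion
    out = stabilized.copy()
    out[mark_mask] = mark_value                            # mark stays put
    return out
```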
  • preoperative images and images at the beginning of surgery are registered, and then comparison (tracking) between the images at the beginning of surgery and real-time images is performed.
  • Preoperative planning marks are mapped onto real-time images (real-time operative field images).
  • if the preoperative planning marks simply moved with the eye movements, the operator would have to perform wound creation, anterior capsulotomy, axis alignment of a toric IOL (an intraocular lens for correcting astigmatism), centering of the IOL, and the like while referring to a moving index, making it difficult to perform surgery accurately according to the preoperative plan. Therefore, instead of moving the preoperative plan mark with the eye, the real-time image is geometrically transformed and displayed so that it fits the fixed preoperative plan mark, thereby realizing surgery that follows the preoperative plan with high precision.
  • FIG. 4 is a diagram showing Example 1 of a display image according to the first embodiment.
  • two triangular marks M1 are presented fixedly.
  • using each fixed-posture mark M1 as a reference, the real-time surgical field image G1 is transformed (changed) in accordance with the direction and amount of movement of the eyeball so that, relative to each fixed-posture mark M1, it moves in the direction opposite to the eyeball's movement by the amount of that movement.
  • Each mark M1 in a fixed posture is superimposed on the real-time operating field image G1 to generate a display image.
  • This display image is displayed on the display screen by both or one of the monitor 14 and the presentation unit 55 .
  • two marks M1 are provided on a straight line orthogonal to the eye axis with the center of the eye axis in between. Some of these marks M1 are located on the iris A1.
  • Each mark M1 is a mark for alignment of the intraocular lens B1 such as a toric IOL that corrects astigmatism (a target mark for installing the intraocular lens B1).
  • Two marks B1a of the intraocular lens B1 are aligned with these marks M1.
  • when the intraocular lens B1 is a toric IOL, a sufficient astigmatism correction effect cannot be obtained if its axis deviates. Therefore, two marks B1a (for example, dotted lines) indicating the toric axis are engraved at the end points of the toric IOL so that the orientation of the toric IOL around the eye axis can be grasped.
  • the toric IOL mark B1a is aligned with the mark M1 in the real-time surgical field image G1, and the toric IOL is placed in the eye.
  • as described above, the mark M1, which is a pattern indicating the preoperative plan, is presented in a fixed manner, and the surgical field image G1 including the tracked eyeball is displayed so that it has an appropriate posture (position, orientation, etc.) with respect to the fixedly presented mark M1.
  • the operator can thus operate while observing a preoperative planning mark M1, an eyeball, and the like that do not move on the screen. Since the operation is performed while observing a fixed target, it is possible to perform the operation in accordance with the preoperative plan with high accuracy.
  • the fixed posture (position, orientation, etc.) of the preoperative planning mark M1 may be aligned with the posture of the preoperative planning mark registered to the surgical field image G1 at the start of surgery, or may be a specific posture predetermined in the preoperative plan (for example, the axis of the toric IOL may be set horizontal or vertical).
  • the mark display for axis alignment of the toric IOL has been described as an example, but the same mark display method can be used to support surgery according to the preoperative plan for wound creation as well. That is, a triangular mark or the like can indicate in which direction around the eye axis the incision is to be made, at the corneal limbus or at a position slightly inside or outside it, in accordance with the preoperative plan.
  • FIG. 5 is a diagram showing example 2 of a display image according to the first embodiment.
  • a straight line L1 passing through each mark M1 and a straight line L2 passing through each mark B1a of the intraocular lens B1 may be presented.
  • a straight line L1 is a target line for installing the intraocular lens B1.
  • the amount of deviation between the straight lines L1 and L2 (the amount of deviation in the rotational direction around the eye axis) may be measured by image processing, and the measured amount of deviation may be presented as the angle θ. This amount of deviation may be presented in real time.
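The angle θ between the target line L1 and the IOL mark line L2 can be computed from the line endpoints. A small illustrative helper (not from the patent; since the lines are undirected axes, the result is folded into [-90, 90) degrees):

```python
import math

def axis_deviation_deg(line_target, line_iol):
    """Signed angular deviation between the target axis L1 and the
    toric-IOL axis L2, each given as two (x, y) endpoints."""
    def angle(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    theta = angle(*line_iol) - angle(*line_target)
    # fold: an axis at 170 degrees is the same axis as one at -10 degrees
    return (theta + 90.0) % 180.0 - 90.0
```

The operator would rotate the IOL until this value reaches zero.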
  • the operator places the intraocular lens B1 in the eye so that the amount of displacement is zero.
  • the straight line L1 and the straight line L2 are presented in a color that stands out against the background color of the surgical field image G1 and is easily visible to the operator.
  • for wound creation, for example, the amount of deviation between the direction around the eye axis at which the wound is to be created in the preoperative plan and the direction around the eye axis of the tip of a surgical tool, such as a knife, used to create the wound may be presented.
  • FIG. 6 is a diagram showing Example 3 of a display image according to the first embodiment.
  • a circular mark M2 may be fixed and presented.
  • the mark M2 is a target circle for anterior capsulorhexis and has a circular shape centered on the eye axis. Therefore, unlike the case of FIG. 4, the original image, which is the real-time surgical field image G1, need not be rotated. Similarly, when displaying a circle (such as a circle centered on the eye axis) or a point (such as a point indicating the eye axis) during centering of the intraocular lens B1, the original image need not be rotated.
  • the shape of the mark M2 is not limited to a circular shape, and may be, for example, a ring shape.
  • as the center of a circular or ring-shaped mark, it is possible to use, in addition to the eye axis, the center of the corneal limbus, the center of the pupil, the center of the preoperative pupil, the visual axis, the center of the anterior capsulotomy margin, and the like.
  • the amount of deviation (distance) between the mark M2, which is the target circle for anterior capsulotomy, and the actual anterior capsulotomy position may be measured by image processing, and the measured amount of deviation may be presented. This amount of deviation may be presented in real time. The operator performs an anterior capsulorhexis so that the displacement amount becomes zero.
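The radial deviation between the target circle M2 and points sampled on the actual anterior capsulotomy edge could be computed as follows (a hypothetical helper, not from the patent; positive values lie outside the target circle, negative values inside):

```python
import math

def capsulorhexis_deviation(target_center, target_radius, tear_points):
    """Radial deviation of each measured capsulotomy-edge point from the
    target circle (mark M2)."""
    cx, cy = target_center
    return [math.hypot(x - cx, y - cy) - target_radius
            for (x, y) in tear_points]
```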
  • FIG. 7 is a diagram showing Example 4 of a display image according to the first embodiment.
  • the original image G2, which is the real-time surgical field image (without the mark M1), may be presented side by side with the transformed surgical field image G1 (with the mark M1).
  • the size and arrangement of the juxtaposed operative field image G1 and original image G2 may be adjusted according to the operator's preference.
  • One of the operative field image G1 and the original image G2 may be enlarged or reduced, or both may be enlarged or reduced with different scales.
  • the operator can operate an operation unit such as a touch panel or a controller connected to the interface unit 56 to enlarge or reduce the image.
  • when performing eye surgery, the eyeball may move due to manipulation of the eye with surgical tools or the action of the patient's eye muscles. If the operation is performed while referring only to a surgical field image that has been fixed to the preoperative planning marks by canceling the movement of the eyeball, it may be difficult to understand the actual movement of the eye and surgical tools in the surgical field. For this reason, presenting the original image G2, which is the real-time surgical field image, side by side with the transformed surgical field image G1 as described above makes it easy to understand the movement of the eye and the surgical tools in the actual surgical field.
  • FIG. 8 is a diagram showing example 5 of a display image according to the first embodiment. As shown in FIG. 8, only part of the real-time surgical field image G1 may be displayed in order to reduce the difficulty of viewing. For example, a display image (image) within the corneal region of the operating field image G1 in real time is presented.
  • when the result of geometrically transforming the entire original image, which becomes the surgical field image G1, is displayed, the image frame and the like move, and some operators may find the surgical field image G1 difficult to see.
  • in such a case, a mask may be added that displays only a portion of the surgical field image G1 and hides the other portions. This mask may be set, for example, so that the center of the cornea in the surgical field image G1 coincides with the center of the mask.
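Such a mask limited to a disc around the corneal center might be built as follows (illustrative sketch; the shapes and function names are assumptions):

```python
import numpy as np

def corneal_mask(shape, center, radius):
    """Boolean mask keeping only a disc of `radius` pixels around the
    corneal center; everything outside it is hidden in the display."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def apply_mask(image, mask, background=0):
    """Black out (or fill with `background`) everything outside the mask."""
    out = np.full_like(image, background)
    out[mask] = image[mask]
    return out
```

Because the mask is anchored to the display, its edge does not move even as the underlying image is transformed frame by frame.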
  • Display image examples 6 and 7: in display image example 4, the original image G2 was presented side by side to make the movement of the eye easier to see, but other methods may also be used to make the movement of the eye easier to see.
  • FIG. 9 is a diagram showing Example 6 of the display image according to the first embodiment.
  • the actual movement of the eye may be represented on the surgical field image G1 of the fixed eye, for example, by arrows indicating representative movement directions, or by arrows or line segments displayed at the coordinates of a predetermined grid pattern. Such additional information indicating the movement of the eyeball, such as arrows and line segments, may be superimposed on the real-time surgical field image G1.
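The arrow segments at grid coordinates could be generated from the motion vector reported by tracking, for example as below (a hypothetical sketch; in this simple version every grid point shows the same global motion vector):

```python
def motion_arrows(grid_step, width, height, motion_vec, scale=1.0):
    """Return (start, end) arrow segments on a regular grid, each showing
    the current eyeball motion vector, for overlay on the stabilized
    surgical field image."""
    dx, dy = motion_vec
    arrows = []
    for y in range(0, height, grid_step):
        for x in range(0, width, grid_step):
            arrows.append(((x, y), (x + dx * scale, y + dy * scale)))
    return arrows
```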
  • the display image generation unit 13f generates additional information based on information (such as a motion vector) regarding intra-eye movement obtained by the eyeball tracking unit 13e.
  • FIG. 10 is a diagram showing example 7 of the display image according to the first embodiment.
• As shown in FIG. 10, the surgical field image G1 to which the mark M1 is fixed may be superimposed on the original image G2, which is a surgical field image enlarged (or reduced) so as to differ in scale from the surgical field image G1.
• The techniques of FIGS. 9 and 10 described above may be used together with the technique of FIG. 7. Moreover, the methods of FIGS. 4 to 10 may be combined as appropriate.
• When tracking of the eyeball fails, the display image generation unit 13f may continue the display by maintaining the posture of the image at the time when the posture was last estimated. Alternatively, the posture of the surgical field image G1 at the time when the posture was last estimated may be continued in motion at a constant velocity, constant angular velocity, constant acceleration, or constant angular acceleration. In addition, the display of the image may be changed so that the failure can be recognized, for example by changing the color of the mark.
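The two fallback behaviors on tracking loss — holding the last estimated pose, or continuing it at the last observed per-frame velocity — can be sketched as follows. This is an illustrative class, not the patent's implementation; the pose representation and names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float      # pixels
    y: float      # pixels
    angle: float  # degrees

class PoseFallback:
    """Keeps a displayable pose while eyeball tracking is lost: either
    holds the last estimate, or extrapolates it at the last observed
    per-frame velocity (constant-velocity motion)."""

    def __init__(self, extrapolate: bool = False):
        self.extrapolate = extrapolate
        self.last: Optional[Pose] = None
        self.velocity = (0.0, 0.0, 0.0)  # per-frame dx, dy, dangle

    def update(self, tracked: Optional[Pose]) -> Optional[Pose]:
        if tracked is not None:
            if self.last is not None:
                self.velocity = (tracked.x - self.last.x,
                                 tracked.y - self.last.y,
                                 tracked.angle - self.last.angle)
            self.last = tracked
            return tracked
        # tracking lost: hold or extrapolate the last pose
        if self.extrapolate and self.last is not None:
            dx, dy, da = self.velocity
            self.last = Pose(self.last.x + dx, self.last.y + dy,
                             self.last.angle + da)
        return self.last
```

A display layer could additionally recolor the mark whenever `update` is called with `None`, so the failure is visible.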
• The surgical field image G1 to which the mark M1 is fixed may be presented in 2D (two dimensions) or in 3D (three dimensions). This makes it easy for the operator to perform the surgery even when referring only to the surgical field image G1 to which the mark M1 is fixed.
• When presenting in 3D, the same geometric transformation may be applied to the image for the left eye and the image for the right eye.
• Although the original image G2 may be presented in 2D, it may also be presented in 3D. In this case as well, the operator can easily perform the surgery when referring to the original image G2.
• Both the presentation of the surgical field image G1 to which the mark M1 is fixed and the presentation of the original image G2 may be performed in 2D or 3D, or one of them may be performed in 2D and the other in 3D.
  • FIG. 11 is a diagram showing Example 8 of a display image according to the first embodiment.
• As shown in FIG. 11, two luminance regions with different luminances are set in the surgical field image G1, and the boundary M3 between these luminance regions is fixed and presented. This boundary M3 functions as a line-shaped boundary, that is, a line boundary (a target line for installing the intraocular lens B1). In the example of FIG. 11, the luminance of the right luminance region of the two (the shaded region in FIG. 11) is set lower than the luminance of the left luminance region. At the time of surgery, the toric axis is aligned with this boundary M3, and a toric IOL is installed.
• Note that the number of luminance regions is not limited to two and may be any number of two or more.
• In the example of FIG. 11, the boundary M3 of the luminance change is fixed, and the posture of the surgical field image G1 is changed so that the surgical field image G1 maintains an appropriate posture (position, orientation, etc.) with respect to the fixedly presented boundary M3, that is, so that the displacement of the eyeball in the surgical field image G1 is eliminated with respect to the boundary M3. In other words, by fixing the boundary M3 and changing the posture of the surgical field image G1, the positional relationship between the eyeball and the boundary M3 does not change.
• Alternatively, the surgical field image G1 may be fixed, and the posture of the boundary M3 may be changed so that the boundary M3 maintains an appropriate posture (position, orientation, etc.) with respect to the fixedly presented surgical field image G1, that is, so that the boundary M3 is not displaced with respect to the eyeball in the surgical field image G1 of the fixed posture. Changing the posture of the boundary M3 means changing the range (for example, the size, shape, etc.) of each luminance region.
• In this case, the display image generation unit 13f changes the posture of the boundary M3 according to the displacement of the eyeball, based on the posture information of the eyeball, while generating the display image. Specifically, in accordance with the direction and amount of movement of the eyeball, the display image generation unit 13f moves the boundary M3 with respect to the real-time surgical field image G1 in the direction of movement by that amount, thereby changing the posture of the boundary M3 (for example, the range of each luminance region). That is, by fixing the surgical field image G1 and changing the posture of the boundary M3, the positional relationship between the eyeball and the boundary M3 does not change.
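The fixed-image variant above, where the boundary M3 follows the eyeball, can be sketched for the simple case of a vertical boundary line and purely horizontal eye displacement. NumPy is assumed; the function name, the vertical-boundary simplification, and the dimming factor are illustrative.

```python
import numpy as np

def apply_luminance_boundary(image: np.ndarray, boundary_x: int,
                             eye_dx: int = 0, dim: float = 0.5) -> np.ndarray:
    """Dim the region to the right of a vertical boundary line.  When the
    image itself is displayed fixed, the boundary is shifted by the eye's
    horizontal displacement `eye_dx` so it stays aligned with the eye."""
    x = boundary_x + eye_dx
    out = image.astype(np.float32)
    out[:, x:] *= dim          # lower-luminance region on the right
    return out.astype(image.dtype)
```

Because only the pixel luminances on one side change, no part of the surgical field is covered by an opaque mark.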
• FIGS. 12 and 13 are diagrams showing Example 9 of the display image according to the first embodiment.
• As shown in FIGS. 12 and 13, two luminance regions with different luminances are set, and the boundary M4 between these luminance regions is presented. This boundary M4 functions as a boundary having a semicircular shape, i.e., a semicircle boundary (a semicircle for forming a target circle for the anterior capsulotomy). Note that between the examples of FIGS. 12 and 13, the boundary M4 of the luminance region is rotated by 90 degrees about the eye axis or the like.
• For example, the boundary M4 of the luminance region is rotated 360 degrees about the eye axis or the like at a predetermined speed (for example, the speed at which the operator moves the tip of the surgical tool), starting from the start of the procedure. The boundary M4 thereby forms the target circle for the anterior capsulotomy. The predetermined speed is set in advance and is, for example, a typical value such as the average of the speeds at which operators move the distal end of the surgical tool.
• Note that the rotation speed of the boundary M4 need not be a predetermined speed. For example, the boundary M4 may be rotated according to the motion of the distal end of the surgical tool or of the end point of the anterior capsulorhexis edge. The processing start unit 13g, which will be described later, can be used to detect the distal end of the surgical tool and the end point of the anterior capsulorhexis edge.
• The rotation angle of the boundary M4 is not limited to 360 degrees and may be another angle such as 180 degrees.
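The rotation of the boundary M4 at a predetermined speed can be illustrated by converting a linear tip speed along the target circle into an angular speed. The function name, units, and the 360-degree cap below are assumptions for illustration only.

```python
import math

def boundary_angle(t_sec: float, tip_speed_mm_s: float,
                   circle_radius_mm: float, max_deg: float = 360.0) -> float:
    """Angle (degrees) swept by the semicircular boundary at time `t_sec`,
    rotating at the angular speed of a tool tip moving at
    `tip_speed_mm_s` along a target circle of radius `circle_radius_mm`."""
    omega_deg = math.degrees(tip_speed_mm_s / circle_radius_mm)  # deg/s
    return min(t_sec * omega_deg, max_deg)
```

Replacing the fixed speed with the tracked tip motion, as described above, would amount to feeding the measured angular position in directly instead of integrating a preset speed.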
• FIG. 14 is a diagram showing Example 10 of the display image according to the first embodiment. As shown in FIG. 14, in addition to the boundary M4 shown in FIGS. 12 and 13, a plurality of boundaries M5 (two in the example of FIG. 14) are presented. These boundaries M4 and M5 are each formed by the boundary between two luminance regions having different luminances, as in Example 9 of the display image. The boundaries M5 are boundaries indicating incision positions.
• Basically, the postures and positions of the boundary M4 and the boundaries M5 are fixed (except for the rotation of the boundary M4 according to the movement of the surgical tool and the anterior capsulorhexis edge), and the surgical field image G1 is moved so that its posture (position, orientation, etc.) follows them. Alternatively, the surgical field image G1 may be fixed and the boundaries M4 and M5 may be moved.
• In the examples of FIGS. 11 to 14, the boundaries M3 to M5 are not marks superimposed on the surgical field image, but boundaries that make it possible to indicate positions and postures while remaining visually recognizable. Unlike superimposed marks, the boundaries M3 to M5 do not hide the surgical field image at the mark positions, so the visibility of the surgical field is improved compared with the case where superimposed marks are used.
• Various display images as described above are used, and these display images may be selectable by the operator, the staff, or the like. The selection of a display image is realized by an input operation on the operation unit by the operator, the staff, or the like; for example, the operator or the staff operates the operation unit to select a display mode for displaying a desired display image, and the display image generation unit 13f generates a display image based on the selected display mode. Similarly, the size, position, and the like of the various images may be changed by the operator, the staff, or the like; the display image generation unit 13f generates the display image by changing the size, position, and the like of the image according to the input operation on the operation unit.
• As described above, according to the first embodiment, the image input unit 13b receives a surgical field image (for example, the surgical field image G1) of the patient's eye, the eyeball tracking unit 13e tracks the eyeball in the surgical field image and detects the displacement of the eyeball in the surgical field image, and the display image generation unit 13f converts the surgical field image based on the displacement of the eyeball, generates a display image by superimposing a mark (for example, the mark M1 or M2) on the converted surgical field image, and positions the mark at a predetermined position on the display screen regardless of the conversion of the surgical field image.
  • the mark can be fixed and displayed at a predetermined position instead of being moved according to the movement of the eye.
• Note that the displacement includes any change of the subject, such as the eyeball, for example parallel movement, rotation, enlargement or reduction, deformation, and combinations thereof.
• Further, the preoperative plan receiving unit 13a receives a preoperative image based on the preoperative plan for the patient's eye and the position information of the mark (for example, its position and orientation on the coordinates); the information storage unit 13d converts the position information of the mark in accordance with the surgical field image at the start of surgery by comparing the preoperative image with the surgical field image at the start of surgery, and accumulates the surgical field image at the start of surgery and the converted position information of the mark; the eyeball tracking unit 13e tracks the eyeball in the real-time surgical field image by comparing the surgical field image at the start of surgery with the real-time surgical field image, and outputs displacement information based on the position information of the eyeball in the real-time surgical field image (for example, its position and orientation on the coordinates) and the converted position information of the mark; and the display image generation unit 13f generates the display image by changing the position of the real-time surgical field image based on the displacement information so as to eliminate the positional change of the eyeball with respect to the mark fixed at the predetermined position based on the converted position information of the mark.
• As a result, the preoperative planning marks can be fixed, rather than moving with the eye movements, while the position of the real-time surgical field image (for example, its coordinate position, orientation, etc.) is changed and displayed. It therefore becomes easier for the operator to visually recognize the marks and to perform detailed positioning and posture setting of an implant such as an intraocular lens, so that the surgery can be performed with high accuracy according to the preoperative plan.
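For the simple case where the detected displacement is a pure translation, the position change that keeps the eyeball registered to the fixed mark can be sketched as follows (NumPy assumed; real displacement also includes rotation, scaling, and deformation, as noted above, and the function name is illustrative).

```python
import numpy as np

def stabilize_to_mark(frame: np.ndarray, eye_dx: int, eye_dy: int) -> np.ndarray:
    """Translate the live frame by the inverse of the detected eyeball
    displacement so the eye stays registered to the fixed on-screen mark.
    Regions shifted in from outside the frame are filled with zeros."""
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    # source/destination offsets for a shift of (-eye_dx, -eye_dy)
    ys, yd = max(0, eye_dy), max(0, -eye_dy)
    xs, xd = max(0, eye_dx), max(0, -eye_dx)
    hh, ww = h - abs(eye_dy), w - abs(eye_dx)
    out[yd:yd + hh, xd:xd + ww] = frame[ys:ys + hh, xs:xs + ww]
    return out
```

The fixed mark is then drawn at its constant screen coordinates on top of the stabilized frame.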
• Further, the display image generation unit 13f generates the display image by arranging, side by side, the image obtained by superimposing the mark on the converted surgical field image and the real-time surgical field image received by the image input unit 13b. This makes it easy to grasp the movements of the eye and the surgical tools in the actual surgical field, so that the surgery can be performed with high accuracy according to the preoperative plan.
• Further, the display image generation unit 13f may generate the display image by superimposing the image obtained by superimposing the mark on the converted surgical field image on the real-time surgical field image received by the image input unit 13b.
• Further, the display image generation unit 13f enlarges or reduces both or one of the image in which the mark is superimposed on the converted surgical field image and the real-time surgical field image received by the image input unit 13b. This makes it easier to grasp the movements of the eye and the surgical tools in the actual surgical field, so that the surgery can be performed more accurately according to the preoperative plan.
• Further, the display image generation unit 13f generates, as the display image, an image of the corneal region in the converted surgical field image, or an image of a region including the cornea and its periphery in the converted surgical field image. As a result, it is possible to reduce the impression that the display image is difficult to view due to the movement of the image frame and the like that occurs when the entire real-time surgical field image is displayed, so that the surgery according to the preoperative plan can be realized with high accuracy.
  • the display image generation unit 13f generates a display image by superimposing additional information indicating the movement of the eyeball on the converted surgical field image.
• Further, when the eyeball tracking unit 13e loses track of the eyeball, the display image generation unit 13f maintains the display image from before the tracking was lost. As a result, interruption of the surgery due to disappearance of the display image can be avoided, so that the surgery can be performed with high accuracy according to the preoperative plan.
• Further, the display image generation unit 13f generates the display image in three dimensions. As a result, the operator can easily perform the surgery even when referring only to the surgical field image to which the mark is fixed, so that the surgery according to the preoperative plan can be performed with high accuracy.
• Further, the display image generation unit 13f sets a plurality of luminance regions having different luminances in the surgical field image, and positions the boundaries between the plurality of luminance regions (for example, the boundaries M3 to M5) in place of the marks to generate the display image. As a result, the positions indicated in the actual surgical field can be easily grasped without the surgical field being hidden by superimposed marks, so that the surgery according to the preoperative plan can be performed with higher accuracy.
• Further, when the mark (for example, the mark M2) has a shape that has no posture (orientation), such as a circular shape, the display image generation unit 13f may limit or omit the conversion of the surgical field image. As a result, execution of unnecessary processing can be avoided, so the processing speed can be improved.
  • FIG. 15 is a diagram showing an example of the schematic configuration of the image processing device 13 according to the second embodiment.
  • FIG. 16 is a diagram showing an example of a display image including a surgical start instruction target according to the second embodiment.
• FIG. 17 is a flowchart showing an example of the surgery start instruction processing according to the second embodiment.
• The image processing device 13 includes a processing start unit 13g in addition to the units 13a to 13f according to the first embodiment.
  • the processing start unit 13g starts the processing at the start of the operation (for example, permits the start of the processing at the start of the operation).
  • a surgical tool used for surgery or various instruments can be used as the surgical start instruction target C1.
  • the processing start unit 13g functions as an image recognition unit that detects the surgical start instruction object C1 by image recognition processing.
• In step S11, it is determined whether or not the processing start unit 13g has detected the surgical start instruction target C1 in the surgical field. When the surgical start instruction target C1 is detected in the surgical field (YES in step S11), registration is performed by the registration unit 13c in step S12. In step S13, the other processes at the start of surgery are performed by the information storage unit 13d and the like, and in step S14, a transition to the guidance display mode is made.
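The flow of steps S11 to S14 can be sketched as a small loop over incoming frames. The callback names below are hypothetical stand-ins for the detection, registration, and start-of-surgery processing described above.

```python
def surgery_start_flow(frames, detect_target, register, other_start_tasks):
    """Steps S11-S14: wait until the start-instruction target C1 is seen
    in a frame, then run registration on that frame (the surgery-start
    image), run the remaining start-of-surgery tasks, and switch to the
    guidance display mode."""
    for frame in frames:
        if detect_target(frame):                 # S11
            register(frame)                      # S12
            other_start_tasks(frame)             # S13
            return "guidance_display_mode"       # S14
    return "waiting"
```

Note that the frame in which the target is first detected doubles as the surgery-start image passed to registration.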
• In this way, the image processing device 13 detects the surgical start instruction target C1 from the surgical field image G1, acquires the surgical field image G1 at the detection timing as the surgery start image, performs registration, which is one of the processes at the start of surgery, using the acquired surgery start image, and performs the other processes at the start of surgery.
• As described above, according to the second embodiment, when the surgical start instruction target C1, such as a surgical tool, is inserted into the surgical field, the processing start unit 13g detects the surgical start instruction target C1, the surgical field image G1 at the detection timing is used as the surgery start image, and the other processes at the start of surgery are executed. Since the operator can thus issue an instruction to start the surgery simply by inserting the surgical start instruction target C1 into the surgical field, complication of the workflow and occurrence of results unintended by the operator can be suppressed.
• In other words, by providing the processing start unit 13g, which functions as an image recognition unit capable of specifying the start timing from the operator's participation in the image, the target inserted into the surgical field (camera image) by the operator can be detected automatically and various processes can be executed. In addition, since the operator can instruct the start of the processing by his or her own action and there is no need to ask the staff to operate the device, the workflow can be simplified.
• FIG. 17 shows an example in which registration, the other processes at the start of surgery, and the transition to the guidance display mode (the various display modes described in the first embodiment) are performed when the surgical start instruction target C1 is detected in the surgical field; however, the processing to be performed is not limited to these, and only a part of them may be performed. Further, the detection timing of the surgical start instruction target C1 may be processed so as to be, for example, the time when the surgical start instruction target C1 reaches a specific position such as the center of the cornea.
• The detection process may be performed on the assumption that the insertion direction of the surgical start instruction target C1 is a specific direction; knowing the insertion direction in advance can improve the detection accuracy. For example, the insertion direction of the surgical start instruction target C1 may be limited to above or below the eye.
• The part of the eye hidden by the surgical start instruction target C1 in the surgery start image cannot be used for comparison with the preoperative image. In both the preoperative image and the real-time surgical field image G1, the upper and lower parts of the eyeball are often hidden by the eyelid. Therefore, if the insertion direction of the surgical start instruction target C1 is limited to above or below the eye, the hidden portion largely overlaps the portion already hidden by the eyelid in the preoperative image, and the adverse effect on the registration processing is reduced, so that the registration accuracy can be improved.
• Further, the type of the surgical start instruction target C1 desired by the operator, such as a pestle or a side port knife that is often used immediately after the start of surgery, may be determined, and the processing start unit 13g may be trained by supplying it with images of each type before the surgery. As a result, the detection accuracy of the surgical start instruction target C1 can be increased while improving convenience for the operator. For this learning, various neural networks such as a convolutional neural network can be used.
• Further, an ophthalmic surgery guidance system may be configured by adding a specific instrument or the like as the surgical start instruction target C1 to the ophthalmic surgery guidance device.
• Further, the surgical start instruction target C1 may be given a color different from the colors of the eyeball (for example, the color of the white of the eye, the color of the blood vessels, the color of the iris, and the color of the pupil during surgery) so that it can easily be distinguished from the eyeball, which is the background image, during the detection process.
• Further, the shape of the surgical start instruction target C1 may be a shape that facilitates the detection process. For example, the surgical start instruction target C1 may have a shape in which a circular tip is attached to the end of the handle of a surgical tool (for example, a rod-shaped handle).
• FIGS. 18 and 19 are diagrams showing examples (Example 1 and Example 2) of the shape of the surgical start instruction target C1 according to the second embodiment.
• The shape of the surgical start instruction target C1 may be a shape with a square tip at the end of the handle of the surgical tool as shown in FIG. 18, a shape with a star-shaped tip as shown in FIG. 19, or a shape with a tip of various other shapes such as elliptical or polygonal. As a result, the detection accuracy of the surgical start instruction target C1 can be improved.
• Further, as the surgical start instruction target C1, a target whose shape in the captured image can be accurately predicted (for example, one whose dimensions are known in advance) may be used. For example, if a surgical tool with a spherical tip attached to the end of the handle is used, the relationship between sizes on the image and actual sizes can be clarified by comparing the size of the tip on the captured image with its actual size, since the actual size of the tip is known in advance. The distal end of the surgical tool can therefore be used as a reference for measuring the size of objects in subsequent images. This makes it possible to obtain the surgery start instruction and perform size calibration at the same time, thereby simplifying the surgical workflow.
• For example, suppose that the tip of the surgical tool is brought into contact with the corneal vertex when the surgical start instruction target C1 is inserted into the image. In this case, the difference between the distance from the imaging device for the surgical field image G1 to the corneal vertex and the distance from the imaging device to the contour of the tip of the surgical tool is equal to the radius of the sphere, so by utilizing this relationship, highly accurate calibration that takes into account the distance from the imaging device can be carried out.
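Under the stated assumption of a tip of known physical size, the basic size calibration reduces to computing a millimetres-per-pixel scale from the tip's diameter on the image. The names and numbers below are illustrative only.

```python
def mm_per_pixel(tip_diameter_px: float, tip_diameter_mm: float) -> float:
    """Image scale derived from a tool tip of known physical size that is
    visible in the surgery-start frame."""
    return tip_diameter_mm / tip_diameter_px

def measure_mm(length_px: float, scale_mm_per_px: float) -> float:
    """Convert a pixel measurement elsewhere in the image to millimetres."""
    return length_px * scale_mm_per_px
```

The depth refinement described above (distance to the tip contour differing from the distance to the corneal vertex by the sphere's radius) would adjust this scale for the working distance.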
• FIG. 20 is a diagram showing an example of the colors of the surgical start instruction target C1 according to the second embodiment. As shown in FIG. 20, for example, one or more known colors such as green, blue, and red may be given to the surgical start instruction target C1, so that the white balance and other colors can be calibrated. This makes it possible to obtain the surgery start instruction and perform color calibration at the same time, thereby simplifying the surgical workflow.
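One simple form such color calibration could take is a per-channel gain fit from patches of known color on the target. This least-squares sketch (NumPy assumed) is an illustration, not the method actually used.

```python
import numpy as np

def channel_gains(observed_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    """Per-channel correction gains from patches of known color on the
    start-instruction target: least-squares fit of one gain per channel
    over the observed (N x 3) vs. reference (N x 3) patch colors."""
    obs = observed_rgb.astype(np.float64)
    ref = reference_rgb.astype(np.float64)
    return (obs * ref).sum(axis=0) / (obs * obs).sum(axis=0)

def correct(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply the gains to an RGB image and clamp back to 8-bit range."""
    return np.clip(image.astype(np.float64) * gains, 0, 255).astype(np.uint8)
```

With a single patch this degenerates to a plain per-channel ratio, i.e. an ordinary white-balance adjustment.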
• As described above, according to the second embodiment, the processing start unit 13g detects the surgical start instruction target (for example, the surgical start instruction target C1) based on the surgical field image (for example, the surgical field image G1) received by the image input unit 13b, and starts the processing at the start of surgery when the surgical start instruction target is detected. As a result, the operator can issue an instruction to start the surgery by inserting the surgical start instruction target into the surgical field, so that complication of the workflow and occurrence of results unintended by the operator can be suppressed.
• Further, the processing start unit 13g starts, as the processing at the start of surgery, the process of acquiring the surgical field image at the start of surgery.
• Further, the processing start unit 13g starts the processing at the start of surgery when it detects, based on the surgical field image received by the image input unit 13b, the surgical start instruction target inserted into the surgical field image from a specific direction. In this case, since the insertion direction is known, the detection accuracy of the surgical start instruction target can be improved, so that complication of the surgical workflow can be reliably suppressed.
  • the above specific direction is above or below the eye in the operative field image received by the image input unit 13b.
• Further, the processing start unit 13g learns the surgical start instruction target before the surgery. As a result, the detection accuracy of the surgical start instruction target can be improved, so that complication of the surgical workflow can be reliably suppressed.
• Further, the processing start unit 13g detects the predetermined shape or color of the surgical start instruction target. As a result, the detection accuracy of the surgical start instruction target can be improved, so that complication of the surgical workflow can be reliably suppressed.
• Further, the processing start unit 13g uses the detected dimensions of the predetermined shape of the surgical start instruction target for calibration when measuring the size of a measurement target in the surgical field image. As a result, the acquisition of the surgery start instruction and the size calibration can be performed at the same time, so that complication of the surgical workflow can be reliably suppressed.
• Further, the processing start unit 13g uses the detected color of the surgical start instruction target for adjusting the color balance of the surgical field image. As a result, the acquisition of the surgery start instruction and the color calibration can be performed at the same time, so that complication of the surgical workflow can be reliably suppressed.
• <Example of schematic configuration of a computer> The series of processes described above can be executed by hardware or by software.
  • a program that constitutes the software is installed in the computer.
  • the computer includes, for example, a computer built into dedicated hardware and a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 21 is a diagram showing an example of a schematic configuration of a computer 500 that executes the series of processes described above by a program.
  • the computer 500 has a CPU (Central Processing Unit) 510, a ROM (Read Only Memory) 520, and a RAM (Random Access Memory) 530.
  • the CPU 510 , ROM 520 and RAM 530 are interconnected by a bus 540 .
  • An input/output interface 550 is also connected to the bus 540 .
  • An input unit 560 , an output unit 570 , a recording unit 580 , a communication unit 590 and a drive 600 are connected to the input/output interface 550 .
  • the input unit 560 is composed of a keyboard, mouse, microphone, imaging device, and the like.
  • the output unit 570 is configured with a display, a speaker, and the like.
  • the recording unit 580 is composed of a hard disk, a nonvolatile memory, or the like.
  • the communication unit 590 is configured by a network interface or the like.
  • a drive 600 drives a removable recording medium 610 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory.
• The CPU 510 loads, for example, the program recorded in the recording unit 580 into the RAM 530 via the input/output interface 550 and the bus 540 and executes it, whereby the series of processes described above is performed.
• The program executed by the computer 500 (that is, the CPU 510) can be provided by being recorded on a removable recording medium 610 such as a package medium, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the recording unit 580 via the input/output interface 550 by loading the removable recording medium 610 into the drive 600 . Also, the program can be received by the communication unit 590 and installed in the recording unit 580 via a wired or wireless transmission medium. In addition, the program can be installed in the ROM 520 or the recording unit 580 in advance.
• The program executed by the computer 500 may be a program in which the processes are performed in chronological order according to the order described in this specification, or a program in which the processes are performed in parallel or at necessary timings, such as when a call is made.
• In this specification, a system means a set of multiple components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • this technology can take the configuration of cloud computing in which one function is shared by multiple devices via a network and processed jointly.
  • each step described in the flow of processing described above can be executed by a single device, or can be shared and executed by a plurality of devices.
• Furthermore, when one step includes multiple processes, the multiple processes included in that one step can be executed by one device or shared and executed by multiple devices.
  • the present technology can also take the following configuration.
• (1) An image processing device comprising: an image input unit that receives a surgical field image of a patient's eye; an eyeball tracking unit that tracks an eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image; and a display image generation unit that converts the surgical field image based on the displacement of the eyeball and generates a display image by superimposing a mark on the converted surgical field image, wherein the display image generation unit generates the display image by positioning the mark at a predetermined position on the display screen.
• (2) The image processing device further comprising: a preoperative plan reception unit that receives a preoperative image based on a preoperative plan for the eye and position information of the mark; and an information accumulation unit that converts the position information of the mark in accordance with the surgical field image at the start of surgery by comparing the preoperative image with the surgical field image at the start of surgery, and accumulates the surgical field image at the start of surgery and the converted position information of the mark, wherein the eyeball tracking unit tracks the eyeball in the real-time surgical field image by comparing the surgical field image at the start of surgery with the real-time surgical field image and outputs displacement information of the eyeball in the real-time surgical field image, and the display image generation unit generates the display image by changing the position of the real-time surgical field image based on the displacement information so as to eliminate a positional change of the eyeball with respect to the mark fixed at the predetermined position based on the converted position information of the mark. The image processing apparatus according to (1) above.
  • (3) The image processing device according to (1) or (2), wherein the display image generation unit generates the display image by juxtaposing the image in which the mark is superimposed on the converted surgical field image with the real-time surgical field image received by the image input unit.
  • (4) The image processing device according to (1) or (2), wherein the display image generation unit generates the display image by superimposing the image in which the mark is superimposed on the converted surgical field image on the real-time surgical field image received by the image input unit.
  • (5) The image processing device according to (3), wherein the display image generation unit enlarges or reduces one or both of the image in which the mark is superimposed on the converted surgical field image and the real-time surgical field image received by the image input unit.
  • (6) The image processing device according to any one of (1) to (5), wherein the display image generation unit generates, as the display image, an image of the corneal region in the converted surgical field image, or an image of a region including the cornea and its periphery in the converted surgical field image.
  • (7) The image processing device according to any one of (1) to (6), wherein the display image generation unit generates the display image by superimposing additional information indicating the movement of the eyeball on the converted surgical field image.
  • (8) The image processing device according to any one of (1) to (7), wherein, when the eye tracking unit loses track of the eyeball, the display image generation unit maintains the display image from before tracking was lost.
  • (9) The image processing device according to any one of (1) to (8), wherein the display image generation unit generates the display image in three dimensions.
  • (10) The image processing device according to any one of (1) to (9), further comprising a processing initiation unit that starts processing for the start of surgery when a surgery start instruction target is detected based on the surgical field image received by the image input unit.
  • (11) The image processing device according to (10), wherein the processing initiation unit starts, as the processing for the start of surgery, processing to acquire the surgical field image at the start of surgery.
  • (12) The image processing device according to (10) or (11), wherein the processing initiation unit starts the processing for the start of surgery when the surgery start instruction target, inserted into the surgical field image from a specific direction, is detected based on the surgical field image received by the image input unit.
  • (13) The image processing device according to (12), wherein the specific direction is above or below the eye in the surgical field image received by the image input unit.
  • (14) The image processing device according to any one of (10) to (13), wherein the processing initiation unit learns the surgery start instruction target preoperatively.
  • (15) The image processing device according to any one of (10) to (14), wherein the processing initiation unit detects a predetermined shape or color of the surgery start instruction target.
  • (16) The image processing device according to (15), wherein the processing initiation unit uses the detected dimensions of the predetermined shape of the surgery start instruction target for calibration when measuring the size of a measurement target in the surgical field image.
  • (17) An image processing device comprising: an image input unit that receives a surgical field image of a patient's eye; an eye tracking unit that tracks the eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image; and a display image generation unit that sets a plurality of luminance regions having different luminances in the surgical field image, converts the surgical field image based on the displacement of the eyeball, and generates a display image by superimposing the boundaries of the plurality of luminance regions on the converted surgical field image, wherein the display image generation unit generates the display image by positioning the boundaries at a predetermined position on the display screen regardless of the conversion of the surgical field image.
  • (18) An image processing device comprising an image input unit that receives a surgical field image of a patient's eye …
  • (19) An image processing method comprising, by an image processing device: receiving a surgical field image of a patient's eye; tracking the eyeball in the surgical field image and detecting displacement of the eyeball in the surgical field image; and converting the surgical field image based on the displacement of the eyeball and superimposing a mark on the converted surgical field image to generate a display image, wherein the image processing device generates the display image by positioning the mark at a predetermined position on the display screen regardless of the conversion of the surgical field image.
  • (20) A surgical microscope system comprising: a surgical microscope that obtains a surgical field image of a patient's eye; an image processing device that generates a display image; and a display device that displays the display image, wherein the image processing device includes: an image input unit that receives the surgical field image; an eye tracking unit that tracks the eyeball in the surgical field image and detects displacement of the eyeball in the surgical field image; and a display image generation unit that converts the surgical field image based on the displacement of the eyeball and superimposes a mark on the converted surgical field image to generate the display image, the display image generation unit generating the display image by positioning the mark at a predetermined position on the display screen regardless of the conversion of the surgical field image.
  • (21) An image processing method using the image processing device according to any one of (1) to (18).
  • (22) A surgical microscope system comprising the image processing device according to any one of (1) to (18).
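One of the configurations above uses the detected dimensions of a surgery start instruction target of predetermined shape as a calibration reference for measuring the size of a measurement target in the surgical field image. The sketch below illustrates that idea under the simplest possible assumption, a flat target viewed head-on; the function names and the 10 mm example value are illustrative and do not come from the publication.

```python
def pixels_per_mm(detected_px: float, known_mm: float) -> float:
    """Calibration factor derived from an instruction target whose
    physical dimension (known_mm) is known in advance and whose extent
    in the image (detected_px) has been measured by detection."""
    return detected_px / known_mm

def measure_mm(length_px: float, scale_px_per_mm: float) -> float:
    """Convert a length measured in the image to millimetres."""
    return length_px / scale_px_per_mm

# Illustrative numbers: a 10 mm target spanning 250 px gives 25 px/mm,
# so a 150 px feature in the same image measures 6 mm.
scale = pixels_per_mm(250.0, 10.0)
feature_mm = measure_mm(150.0, scale)
```

A real system would also have to account for magnification changes and perspective distortion, which this sketch ignores.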

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Vascular Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An image processing device (13) according to an embodiment of the present disclosure comprises: an image input unit (13d) that receives a surgical field image of a patient's eye; an eye tracking unit (13e) that tracks the eye in the surgical field image and detects a positional change of the eye in the surgical field image; and a display image generation unit (13f) that converts the surgical field image based on the positional change of the eye and generates a display image in which a mark is superimposed on the converted surgical field image. The display image generation unit (13f) generates the display image by positioning the mark at a predetermined position on the display screen regardless of the conversion of the surgical field image.
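The mechanism described in the abstract, cancelling the detected displacement of the eye and keeping the mark at a fixed position on the display screen, can be sketched as follows. This is a minimal illustration assuming a purely translational displacement; the function and variable names are invented for the example, and a real implementation would warp the image rather than apply a wrap-around shift.

```python
import numpy as np

def compensate_and_mark(frame, eye_xy, ref_xy, mark_xy):
    """Shift the live surgical field image so the tracked eyeball stays
    where it was in the reference frame, then draw a mark at a fixed
    display position that is unaffected by the shift."""
    dx = ref_xy[0] - eye_xy[0]  # displacement to cancel (columns)
    dy = ref_xy[1] - eye_xy[1]  # displacement to cancel (rows)
    shifted = np.roll(frame, shift=(dy, dx), axis=(0, 1))  # translate image
    shifted[mark_xy[1], mark_xy[0]] = 255  # mark stays put on screen
    return shifted

# Toy frame: a bright pixel stands in for the tracked eyeball.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[5, 5] = 200  # eyeball detected at (x=5, y=5) in the live frame
view = compensate_and_mark(frame, eye_xy=(5, 5), ref_xy=(3, 3), mark_xy=(0, 0))
# The eyeball feature is moved back to its reference position (3, 3),
# while the mark is drawn at the fixed screen position (0, 0).
```

The design point this illustrates is that the compensation is applied to the image, not to the mark: the mark's screen coordinates never change, so the preoperative plan appears stationary even as the eye moves.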
PCT/JP2021/046452 2021-01-29 2021-12-16 Image processing device, image processing method, and surgical microscope system Ceased WO2022163188A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021013641 2021-01-29
JP2021-013641 2021-01-29

Publications (1)

Publication Number Publication Date
WO2022163188A1 true WO2022163188A1 (fr) 2022-08-04

Family

ID=82654305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/046452 Ceased WO2022163188A1 (fr) Image processing device, image processing method, and surgical microscope system

Country Status (1)

Country Link
WO (1) WO2022163188A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025215448A1 (fr) * 2024-04-09 2025-10-16 Alcon Inc. Providing a depth overlay for an ophthalmic system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09140671A (ja) * 1995-11-22 1997-06-03 Canon Inc Fundus blood flow meter and fundus tracking device
JP2006136714A (ja) * 2004-10-26 2006-06-01 Carl Zeiss Surgical Gmbh Surgical microscopy system and method for performing eye surgery
JP2012030054A (ja) * 2010-07-05 2012-02-16 Canon Inc Ophthalmic apparatus, ophthalmic system, and storage medium
JP2012506272A (ja) * 2008-10-22 2012-03-15 SensoMotoric Instruments Gesellschaft für Innovative Sensorik mbH Image processing method and apparatus for computer-assisted eye surgery
JP2013509273A (ja) * 2009-10-30 2013-03-14 The Johns Hopkins University Visual tracking/annotation of clinically significant anatomical landmarks for surgical intervention
JP2015033650A (ja) * 2014-11-19 2015-02-19 Canon Inc Ophthalmic apparatus and method for measuring movement of an eye to be examined

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09140671A (ja) * 1995-11-22 1997-06-03 Canon Inc Fundus blood flow meter and fundus tracking device
JP2006136714A (ja) * 2004-10-26 2006-06-01 Carl Zeiss Surgical Gmbh Surgical microscopy system and method for performing eye surgery
JP2012506272A (ja) * 2008-10-22 2012-03-15 SensoMotoric Instruments Gesellschaft für Innovative Sensorik mbH Image processing method and apparatus for computer-assisted eye surgery
JP2013509273A (ja) * 2009-10-30 2013-03-14 The Johns Hopkins University Visual tracking/annotation of clinically significant anatomical landmarks for surgical intervention
JP2012030054A (ja) * 2010-07-05 2012-02-16 Canon Inc Ophthalmic apparatus, ophthalmic system, and storage medium
JP2015033650A (ja) * 2014-11-19 2015-02-19 Canon Inc Ophthalmic apparatus and method for measuring movement of an eye to be examined

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025215448A1 (fr) * 2024-04-09 2025-10-16 Alcon Inc. Providing a depth overlay for an ophthalmic system

Similar Documents

Publication Publication Date Title
JP5511516B2 (ja) Ophthalmic apparatus
US9414961B2 (en) Real-time surgical reference indicium apparatus and methods for astigmatism correction
US20130018276A1 (en) Tools and methods for the surgical placement of intraocular implants
WO2017169823A1 (fr) Image processing device and method, surgery system, and surgical member
US11698535B2 (en) Systems and methods for superimposing virtual image on real-time image
US9138138B2 (en) Ophthalmic apparatus and recording medium having ophthalmic program stored therein
JP2014507234A (ja) 視力矯正処置において使用するための波面データの測定/表示/記録/再生
EP2755548A1 (fr) Détermination de l'orientation azimutale de l' oeil d'un patient
US20230397811A1 (en) Ophthalmic observation apparatus, method of controlling the same, and recording medium
JP5570673B2 (ja) Ophthalmic apparatus
WO2022163188A1 (fr) Image processing device, image processing method, and surgical microscope system
US12465439B2 (en) Image processing device, image processing method, and surgical microscope system
US20240045497A1 (en) Image processing apparatus, image processing method, and operation microscope system
WO2022163189A1 (fr) Image processing device, image processing method, and surgical microscope system
US20240033035A1 (en) Image processing device, image processing method, and surgical microscope system
US20250139795A1 (en) Image processing device, image processing method, and surgical microscope system
JP7166080B2 (ja) Ophthalmic apparatus
JP7535785B2 (ja) Ophthalmic apparatus
CN116744838A (zh) Image processing device, image processing method, and surgical microscope system
JP2022116559A (ja) Image processing device, image processing method, and surgical microscope system
JP2018117788A (ja) Ophthalmic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21923192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21923192

Country of ref document: EP

Kind code of ref document: A1