
WO2025088983A1 - Image display device and image display control program - Google Patents


Info

Publication number
WO2025088983A1
WO2025088983A1 (PCT/JP2024/034939)
Authority
WO
WIPO (PCT)
Prior art keywords
image
marker
region
interest
patient
Prior art date
Legal status
Pending
Application number
PCT/JP2024/034939
Other languages
French (fr)
Japanese (ja)
Inventor
Kenji Suzuki
Toshihiko Ochiai
Hiroshi Arima
Current Assignee
Juntendo Educational Foundation
Original Assignee
Juntendo Educational Foundation
Priority date
Filing date
Publication date
Application filed by Juntendo Educational Foundation filed Critical Juntendo Educational Foundation
Priority to JP2025553033A (JPWO2025088983A1)
Publication of WO2025088983A1


Classifications

    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 6/03: Computed tomography [CT]
    • A61B 6/46: Arrangements for interfacing with the operator or the patient
    • A61B 6/58: Testing, adjusting or calibrating of apparatus or devices for radiation diagnosis
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 7/00: Image analysis

Definitions

  • the present invention relates to an image display device and an image display control program, and more specifically to an image display device and an image display control program for displaying an image of a region of interest.
  • thoracic drainage is a procedure in which a catheter is inserted into the patient's body and blood, pus, and air that have accumulated in the thoracic cavity are drained out of the body via the catheter.
  • When performing thoracic drainage, there is currently no established technology for accurately determining the position of organs and safely inserting a catheter into a patient's body.
  • an ultrasound device is used to confirm the position of the organs inside the body and the catheter is inserted while avoiding the organs.
  • ultrasound devices cannot be used for thoracic drainage because ultrasound waves are reflected by the air in the thoracic cavity. Moreover, ultrasound devices are difficult to operate, and it is impractical for the doctor in charge of inserting the catheter to perform the procedure while simultaneously operating the ultrasound device.
  • a CT scanner is used to obtain and analyze a chest CT image as an image of the region of interest in advance, and doctors refer to this CT image beforehand when performing thoracic drainage.
  • a chest CT image is an image of a cross-section of the body, so understanding the three-dimensional position of organs depends on the doctor's experience and intuition.
  • the method of obtaining and analyzing chest CT images in advance is not sufficient to accurately determine the position of the patient's organs, and the current situation is that the insertion of a catheter into the correct position during thoracic drainage depends on the doctor's level of skill.
  • the above are problems with thoracic drainage, but similar problems can occur in other treatments. In other words, when performing various treatments, it is difficult to determine where in the patient's body the treatment tool should be inserted because it is not easy to understand the state of the tissues and organs inside the patient's body.
  • the present invention was made in consideration of these problems, and provides an image display device and an image display control program that make it possible to quickly align an image of a region of interest with an image of a real patient with a small amount of calculations.
  • the image display device of the present invention comprises: an image acquisition unit that acquires an area of interest image obtained by attaching markers to the surface of a patient's body and capturing an image of the patient's area of interest together with the markers; an image display unit provided in the user's field of view; a gaze guidance unit that guides the user's gaze to one of the markers; a marker identification unit that identifies the marker in the area corresponding to the user's gaze guided by the gaze guidance unit; and a conversion unit that performs a conversion on the area of interest image to align the marker in the area of interest image with the marker identified by the marker identification unit. The image display unit displays the area of interest image converted by the conversion unit.
  • the image display control program of the present invention is characterized in that it is configured to enable a computer to execute the steps of: attaching a marker to the surface of a patient's body and capturing an image of the patient's region of interest together with the marker to obtain an image of the region of interest; displaying the image of the region of interest together with the patient to which the marker is attached on an image display unit; guiding the user's line of sight to one of the markers on the image display unit; identifying the marker in the region of interest corresponding to the guided line of sight of the user; performing a transformation on the image of the region of interest to align the marker in the image of the region of interest with the identified marker; and displaying the image of the region of interest after the transformation on the image display unit.
  • the present invention provides an image display device and an image display control program that enable rapid alignment of an image of a region of interest with an image of a real patient with a small amount of calculation.
  • FIG. 1 is a schematic diagram illustrating an overall configuration of a medical procedure support system 1 including an image display device according to a first embodiment.
  • FIG. 2 is a block diagram illustrating an example of a hardware configuration of an image server 20.
  • FIG. 3 is a block diagram illustrating an example of the configuration of the AR glasses 30.
  • FIG. 4 is a conceptual diagram for explaining calculation of the coordinates Xm1' to Xm3' of the markers MK in a CT image Pcc and the coordinates Xm1 to Xm3 of the markers MK in the AR glasses 30.
  • FIG. 5 is a schematic diagram showing an example of how to attach the markers MK to the body surface of a patient PT.
  • FIG. 6 is a flowchart showing a procedure for capturing a CT image Pcc using the CT device 10 and calculating the coordinates of the markers MK.
  • FIG. 7 is a flowchart showing a procedure for superimposing and displaying a CT image Pcc and an actual image Ppt of the patient PT through the AR glasses 30.
  • FIG. 8 shows an example of a screen display when guiding the line of sight to a marker MK.
  • FIG. 9 is a schematic diagram illustrating a second embodiment.
  • FIG. 10 shows CT images acquired in the second embodiment.
  • FIG. 11 is a schematic diagram illustrating the second embodiment.
  • FIG. 12 is a flowchart illustrating an operation of the system 1 according to the second embodiment.
  • FIG. 13 shows a modification of the second embodiment.
  • FIG. 14 is a block diagram illustrating a medical procedure support system 1 according to a third embodiment.
  • Further figures are schematic diagrams illustrating configurations of the medical procedure support system 1 according to fourth, fifth, and sixth embodiments, a schematic diagram showing another example of how to attach the markers MK to the body surface of a patient PT, and diagrams for explaining determination of the degree of match of a patient.
  • a medical procedure support system 1 including an image display device will be described with reference to Fig. 1.
  • the medical procedure support system 1 illustrates a system for supporting the execution of thoracic drainage, but the present invention is not limited to this, and a similar configuration can also be adopted in systems for supporting other procedures, such as central venous pressure measurement (CVP) and epidural anesthesia.
  • the system in FIG. 1 may, as an example, comprise a CT device 10 as an image capturing device, an image server 20, AR glasses 30 as an image display device, and a display 40.
  • for a patient PT undergoing thoracic drainage, a CT image Pcc (Pcc0) of the area of interest near the thoracic cavity is first acquired with the CT device 10.
  • the CT image Pcc is displayed on the AR glasses 30 when thoracic drainage is performed.
  • an image that includes the area of interest and the surrounding organs, etc., such as this CT image Pcc (Pcc0), may be collectively referred to as an "area of interest image.”
  • the patient PT has markers MK affixed to their body surface when imaging with the CT device 10 and when performing thoracic drainage.
  • the markers MK are used to align the CT image Pcc with the actual image of the patient PT in the AR glasses 30.
  • at least three markers MK1 to MK3 are affixed to the patient's body surface so as to surround the area of interest.
  • At least three markers MK should be affixed to one patient PT, but it is possible to use more than three markers to achieve more accurate alignment.
  • the patient PT moves to the thoracic drainage treatment room and receives thoracic drainage from the doctor DR.
  • the marker MK remains attached in the same position as during CT imaging in order to align the CT image Pcc as described below.
  • the doctor DR in charge of the thoracic drainage wears AR glasses 30 and performs thoracic drainage by observing the actual image Ppt of the patient seen through the AR glasses 30 and the CT image Pcc superimposed on it. When superimposed, the CT image Pcc is subject to a specified transformation using a method described below.
  • the doctor DR holds the catheter 60 for thoracic drainage and inserts it into the appropriate location within the thoracic cavity based on the CT image Pcc and the patient's actual image Ppt.
  • the doctor DR can also refer to the CT image Pcc displayed on the display 40 as an auxiliary.
  • the doctor DR checks the position of the rib cage and deflated lungs in the CT image Pcc superimposed on the AR glasses 30, and recognizes the dead space within the thoracic cavity.
  • the doctor DR then inserts the catheter 60 between the ribs.
  • the doctor DR proceeds with the insertion of the catheter 60 while checking the CT image Pcc displayed on the AR glasses 30 to make sure that the insertion site is in the correct position and angle.
  • the CT device 10 is a device for capturing a CT image Pcc of the chest of the patient PT.
  • the CT image Pcc is used to grasp the position and condition of the organs around the area of interest of the patient PT when performing thoracic drainage.
  • the CT device 10 is an example of an imaging device that captures images for grasping the position and condition of the patient's organs, and is not limited to this. Imaging devices other than the CT device 10, such as an MRI (Magnetic Resonance Imaging) device, may be used as long as they can obtain images for grasping the position and condition of the patient's organs.
  • the CT device 10 may be shared among multiple medical treatment support systems 1.
  • the medical treatment support system 1 does not need to have its own CT device 10, and it is sufficient if it is configured to be able to acquire CT images taken by a CT device 10 located at an external institution, for example.
  • the CT device 10 and the image server 20 may be connected by a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, a dedicated line, etc.
  • the image server 20 and the AR glasses 30 do not need to be located in the same organization (hospital, etc.), and a system may be set up such that an organization managing the AR glasses 30 can access an image server 20 located in another organization as appropriate.
  • the image server 20 is a computer that manages the captured CT images Pcc and executes image processing for three-dimensional display in the AR glasses 30.
  • the AR glasses 30 superimpose and display the CT images Pcc obtained from the image server 20 together with the actual image Ppt of the patient PT.
  • the image server 20 may include a CPU (Central Processing Unit) 21 as an arithmetic and control device, an input/output interface 22, a RAM (Random Access Memory) 23, a ROM (Read Only Memory) 24, a storage device 25, a 3D modeling engine 26, and a 3D model display engine 27.
  • the CPU 21 is an arithmetic device that handles the calculations involved in the various operations of the image server 20.
  • the input/output interface 22 is an interface device that handles input and output of data and signals between external devices including the CT device 10.
  • the RAM 23 has a function of temporarily storing various calculation data and the like during execution of the image display control program stored in the image server 20.
  • the ROM 24 stores the BIOS and firmware of peripheral devices.
  • the storage device 25 is a storage device that stores the above-mentioned image display control program as well as the CT image Pcc captured by the CT device 10, and is, for example, a hard disk drive device or a solid state drive device.
  • the RAM 23, ROM 24, and storage device 25 are examples of storage devices, and it goes without saying that other storage device configurations can be adopted.
  • the 3D modeling engine 26 is a calculation unit that converts the captured CT image Pcc0 (two-dimensional image) into a three-dimensional CT image Pcc.
  • the 3D model display engine 27 is a calculation unit that converts the three-dimensional CT image Pcc into a data format suitable for display in the AR glasses 30.
  • the 3D modeling engine 26 and the 3D model display engine 27 may be realized by the CPU 21 and a program, or may be realized by an image processing processor or GPU separate from the CPU 21.
  • the AR glasses 30 may include a display unit 31, a light guide unit 32, a speaker 33, a microphone 34, a sensor 35, a wireless communication unit 36, a visible light/infrared camera 37, and an AR glasses control unit 38.
  • the AR glasses 30 also store an image display control program for controlling the image display.
  • the AR glasses control unit 38 further includes a CPU 381, a RAM 382, and a ROM 383.
  • the display unit 31 is a display device that displays the three-dimensional image data Pcc received from the image server 20 via the wireless communication unit 36, and is, for example, a microdisplay.
  • the wireless communication unit 36 and the RAM 382 (memory) function as an image acquisition unit that acquires CT images in the AR glasses 30.
  • the CT image acquired by the image acquisition unit is displayed on the display unit 31 as an image display unit.
  • the light guide unit 32 is a member that guides the light emitted by the display unit 31 to the user's eyes and displays the image of the image data Pcc in front of the user's eyes.
  • the light guide unit 32 can be configured by a holographic lens that is arranged in front of the user's eyes, has translucency, and is capable of guiding light from the display unit 31. It goes without saying that the configuration of the display unit 31 and the light guide unit 32 can adopt a different configuration as long as the image data Pcc can be displayed superimposed on the actual image Ppt of the patient PT.
  • the display unit 31 and the light guide unit 32 can be replaced by a laser scanner or the like.
  • the speaker 33 is a device that emits sound based on audio data received from the image server 20 or an external device to communicate information to the user (doctor DR) by voice.
  • the microphone 34 is a device that detects the voice (commands, etc.) emitted by the user and converts it into audio data.
  • the audio data output by the microphone 34 is also transmitted to the image server 20 as appropriate, and can be collected and stored in the image server 20.
  • the sensor 35 is a sensor for detecting the movement, rotation, tilt, etc. of the AR glasses 30 and for detecting the depth of an object in front of the AR glasses 30, and includes, for example, an accelerometer, a gyro, a magnetometer, an infrared detection device, a depth sensor, etc.
  • the visible light/infrared camera 37 is an imaging device used, for example, to perform the following operations:
    - determine the spatial structure around the AR glasses 30;
    - grasp the position Xg of the AR glasses 30 in the determined spatial structure;
    - detect the user's line of sight;
    - capture an image of the object in front of the user's eyes; and
    - grasp the position Xm of the marker MK.
  • the type, number, placement, etc. of the cameras can be changed in various ways depending on the purpose and specifications.
  • the marker identification unit 391 has a function of executing image processing to identify the marker MK affixed to the body surface of the patient PT in the view presented to the user by the AR glasses 30.
  • the coordinate calculation unit 392 calculates the coordinates (first coordinates) of the marker MK included in the CT image Pcc and the coordinates (second coordinates) of the marker MK identified by the marker identification unit 391. As shown in FIG. 4, the coordinates Xm1' to Xm3' of the marker MK in the CT image Pcc are calculated as coordinates of a coordinate system Cx1 set based on a reference point in the CT device 10.
  • the coordinates Xm1 to Xm3 of the marker MK identified by the marker identification unit 391 in the AR glasses 30 can be calculated as coordinates of a coordinate system Cx2, for example, based on the position Xg of the AR glasses 30.
  • the marker identification unit 391 performs the marker identification operation only in a partial area that corresponds to the line of sight guided by the line of sight guidance unit 393, which will be described later.
  • the gaze guidance unit 393 has the function of guiding the gaze of the user of the AR glasses 30 to the marker MK. As described below, the gaze guidance unit 393 guides the user to direct the gaze to one of the markers MK, and when the result of the guidance is that the gaze is directed to the marker MK, the marker identification unit 391 performs a marker identification operation in a portion of the area corresponding to the direction of the gaze. This limits the area in which the marker identification operation is performed, making it possible to reduce the amount of calculation.
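The search-area restriction described above can be sketched as follows: before running marker detection, the camera frame is cropped to a small window around the detected gaze point, so that only a small fraction of the pixels is ever examined. This is an illustrative sketch under assumed names, not the patent's implementation.

```python
import numpy as np

def crop_to_gaze_region(frame, gaze_xy, radius):
    """Return the sub-image of `frame` centered on the gaze point,
    plus the offset of the window's top-left corner in the full frame.

    Restricting marker detection to this window is what keeps the
    identification step cheap compared with scanning the whole view.
    """
    h, w = frame.shape[:2]
    x, y = gaze_xy
    x0, x1 = max(0, x - radius), min(w, x + radius)
    y0, y1 = max(0, y - radius), min(h, y + radius)
    return frame[y0:y1, x0:x1], (x0, y0)

# A 1080x1920 frame searched in a 100-pixel-radius window touches
# roughly (200 * 200) / (1080 * 1920), i.e. about 2% of the pixels.
```

The returned offset lets any marker position found inside the window be mapped back to full-frame coordinates.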
  • the instrument tip identification unit 394 is a part for identifying the tip of a treatment tool such as a catheter.
  • the conversion unit 395 compares the coordinates Xm1' to Xm3' calculated by the coordinate calculation unit 392 with the coordinates Xm1 to Xm3, and performs a conversion to make the coordinates Xm1' to Xm3' match the coordinates Xm1 to Xm3. By performing such a transformation, it becomes possible to accurately match the CT image Pcc to the actual image of the patient PT observed through the AR glasses 30, thereby assisting in performing accurate thoracic drainage.
  • the calculation in the conversion unit 395 may be a linear transformation such as an affine transformation, or a nonlinear transformation using, for example, the B-spline method.
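As a concrete instance of the alignment performed by the conversion unit 395, the following sketch fits a rigid (rotation plus translation) transform, one common special case of the affine transformation mentioned above, mapping the CT-frame marker coordinates Xm1' to Xm3' onto the glasses-frame coordinates Xm1 to Xm3 in the least-squares sense (Kabsch algorithm). Names are illustrative, not from the source.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares R, t with R @ src[i] + t ≈ dst[i] (Kabsch algorithm).

    src, dst: (N, 3) arrays of corresponding marker coordinates,
    e.g. Xm1'..Xm3' in the CT frame and Xm1..Xm3 in the glasses frame.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With more than three markers the same routine gives the best-fit rotation and translation over all correspondences, which is why adding markers improves alignment accuracy.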
  • FIG. 5 shows an example of how to attach the markers MK to the body surface of the patient PT.
  • a set of multiple markers, for example three markers (MK1 to MK3), is used.
  • the markers MK1 to MK3 are attached to the body surface of the patient PT so as to surround the area of interest DA.
  • Each of the markers MK1 to MK3 can be about 2 cm in diameter and can be made of, for example, white alumina. Any method can be used to fix them to the human body; for example, they can be fixed using medical tape (double-sided or single-sided).
  • the markers MK1 to MK3 are preferably made of a material that has a lower X-ray transmittance than the human body.
  • the markers MK1 to MK3 may be the same shape and color, but it is preferable to use different colors (red, blue, yellow, etc.), shapes (circle, square, triangle), brightness, etc. to make them easier to distinguish.
  • for example, markers MK1 and MK2 can be attached at the midpoints of the left and right clavicles, with marker MK3 attached to the hypochondrium (near the solar plexus).
  • These points can be considered "fixed points”: their movement due to arm movement, breathing, etc. is small, and their movement at least in the horizontal direction is small enough to be negligible.
  • the reference point can be set at a position where the amount of movement due to arm movement, breathing, etc. is approximately average.
  • Figure 6 is a flowchart showing the procedure for taking a CT image Pcc using the CT device 10 and calculating the coordinates of the marker MK.
  • Figure 7 is a flowchart showing the procedure for superimposing the CT image Pcc and the actual image Ppt of the patient PT on the AR glasses 30.
  • Figure 8 shows an example of a screen display when guiding the line of sight to the marker MK.
  • a marker MK is attached to the chest of the patient PT (step S11), and then a CT image Pcc0 of the patient's chest is taken with the CT device 10 (step S12).
  • When capturing the CT image Pcc0, unlike in typical CT scans, it is preferable for the patient PT to have both arms lowered toward the lower body. This is because the subsequent thoracic drainage will be performed with both arms lowered, and it is therefore preferable to perform the CT scan in the same posture.
  • the captured CT image Pcc0 is transferred (acquired) to the image server 20, where an intermediate file (Pcc') processed as voxel data is generated and saved (step S13).
  • the voxel data is also transferred to the AR glasses 30.
  • the coordinate calculation unit 392 in the AR glasses 30 identifies the positions of the three markers MK1 to MK3 based on this voxel data, and calculates their coordinates Xm1', Xm2', and Xm3' (step S14). Note that step S14 may be performed in parallel with the steps in the flowchart of FIG. 7 below.
  • the voxel data is volumetric data in which the CT value is entered in each lattice of a 3D cubic lattice space.
  • the data of each lattice is called the voxel value.
  • the coordinate Xmi of the marker MK can be obtained by extracting a voxel having a specific CT value for identifying the marker MK.
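A minimal sketch of this extraction step: threshold the voxel array to the CT-value band of the marker material and take the centroid of the selected lattice cells. Function and parameter names are hypothetical, and real code would first separate the individual markers by connected-component labeling.

```python
import numpy as np

def marker_coordinates(voxels, spacing, ct_min, ct_max):
    """Centroid, in physical units, of the voxels whose CT value falls
    in the band [ct_min, ct_max] that identifies the marker material
    (a radiopaque marker shows up brighter than soft tissue).

    voxels : 3-D array of CT values
    spacing: physical size of one lattice cell per axis, e.g. in mm
    Returns None if no voxel matches.
    """
    idx = np.argwhere((voxels >= ct_min) & (voxels <= ct_max))
    if idx.size == 0:
        return None
    return idx.mean(axis=0) * np.asarray(spacing, dtype=float)
```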
  • the patient PT moves to the treatment room to undergo thoracic drainage (with the markers MK still attached to the chest), and the doctor DR activates the AR glasses 30 (step S21) and puts them on his or her head.
  • the AR glasses 30 grasp the spatial structure S(x) around the AR glasses 30 according to the output of the various sensors 35 and the visible light/infrared camera 37, identify the position of the AR glasses 30, and calculate the coordinate Xg of that position (step S22).
  • the line of sight guidance unit 393 of the AR glasses 30 displays a screen as shown in FIG. 8 for the doctor DR, who is the user, via the display unit 31, and sequentially guides the doctor DR's line of sight to one of the markers MK1 to MK3 (step S23). Specifically, an instruction OR1 saying "Please look at the marker in the upper left” is displayed in a part of the screen (for example, the lower left), and a line of sight display mark GDM indicating the direction of the doctor DR's line of sight detected according to the output of the visible light/infrared camera 37 and an area display mark CM indicating the area centered on the line of sight are displayed.
  • When the doctor DR determines that the target marker MK1 is included near the center of the area display mark CM, he or she presses a confirmation button (not shown) to confirm the position of the area display mark CM. After that, the same operation is repeated for the markers MK2 and MK3, and the positions of the area display marks CM including the markers MK1 to MK3 are identified (step S24).
  • the confirmation of the area display mark may be performed by, for example, issuing a voice command for confirmation into the microphone 34, instead of using the confirmation button. Also, if it is determined that the gaze has been fixed for a predetermined period of time or more (for example, 3 seconds or more), the gaze direction may be taken as the position of the area display mark CM.
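The dwell-time confirmation rule (gaze fixed for, e.g., 3 seconds or more) could be sketched as follows; the sample format and the jitter threshold are assumptions, not taken from the source.

```python
def confirm_by_dwell(gaze_samples, hold_s=3.0, max_jitter=20.0):
    """Return the fixation point if the gaze stayed within `max_jitter`
    pixels of its mean for at least `hold_s` seconds, else None.

    gaze_samples: list of (timestamp_s, x, y) tuples, oldest first.
    """
    if not gaze_samples:
        return None
    t_end = gaze_samples[-1][0]
    window = [s for s in gaze_samples if t_end - s[0] <= hold_s]
    if t_end - window[0][0] < hold_s:
        return None                       # not yet held long enough
    cx = sum(x for _, x, _ in window) / len(window)
    cy = sum(y for _, _, y in window) / len(window)
    if all(abs(x - cx) <= max_jitter and abs(y - cy) <= max_jitter
           for _, x, y in window):
        return (cx, cy)
    return None
```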
  • When the position of the area display mark CM is specified, the marker identification unit 391 is activated, and the position of the marker MK included in the area display mark CM is specified (step S25). Since the area of the area display mark CM is very small compared to the area of the entire view of the AR glasses 30, the amount of calculation can be suppressed.
  • the coordinate calculation unit 392 calculates the coordinates Xm (Xm1 to Xm3) of the markers MK (step S26). Steps S23 to S26 are repeated as many times as the number of markers MK.
  • the voxel data expressed in the coordinate system Cx1 is converted to an expression in the coordinate system Cx2 by affine transformation or the like (step S28).
  • For the voxel data expressed in the coordinate system Cx2, it is assumed that light is emitted from the coordinate Xg, and the voxel values of the lattice cells through which the light passes are processed appropriately (volume ray casting).
  • This results in a CT image Pcc which is a three-dimensional image displayed on the AR glasses 30.
  • the coordinate Xg is defined for each of the left and right eyes of the AR glasses, and it is possible to display images obtained by emitting light from left and right positions, respectively. This results in an image with parallax, making stereoscopic vision possible.
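A toy sketch of the rendering idea above: maximum-intensity projection along a lattice axis stands in for full volume ray casting, and a one-voxel lateral shift stands in for rendering from the separate left-eye and right-eye positions Xg. This is far simpler than a real renderer and is meant only to illustrate the per-eye structure; all names are illustrative.

```python
import numpy as np

def render_mip(voxels, axis=0):
    """Maximum-intensity projection: for each axis-aligned 'ray'
    (a lattice column along `axis`), keep the largest voxel value it
    passes through. A degenerate but valid instance of casting a ray
    through the voxel lattice and combining the values it crosses.
    """
    return voxels.max(axis=axis)

def render_stereo(voxels):
    """Render once per eye; the one-voxel lateral shift is a stand-in
    for casting rays from distinct left/right eye positions, which is
    what produces the parallax needed for stereoscopic vision."""
    left = render_mip(voxels)
    right = render_mip(np.roll(voxels, 1, axis=2))
    return left, right
```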
  • the obtained three-dimensional image, CT image Pcc is transmitted to the display unit 31 and displayed on the AR glasses 30 in a superimposed manner together with the actual image Ppt of the patient PT.
  • When the program for the AR glasses 30 is executed on the image server 20, the three-dimensional image data of the CT image is transmitted to the display unit 31 via the wireless communication unit 36, for example by wireless communication.
  • the transmission rate is about 30 frames/second, and when the user (doctor DR) fixes his/her gaze near the area of interest DA as described above, the image is displayed sufficiently smoothly.
  • If the transmission rate is increased to 90 frames/second, the image can be displayed smoothly even when the user shifts his/her gaze.
  • the gaze guidance unit 393 guides the gaze of the user (doctor DR) in the direction of the marker MK, an area display mark CM is set in the gaze direction, and the marker MK is recognized (searched for) within the area display mark CM. This narrows the spatial range to be searched and allows the marker MK to be recognized quickly (for example, on the order of 50 to 100 times faster than searching the entire field of view). This operation can be performed sequentially for the three markers.
  • the line of sight is guided by the line of sight guidance unit 393, and then the markers MK are identified in a limited area. This makes it possible to quickly align the CT image with a small amount of calculation.
  • a medical treatment support system 1 according to a second embodiment will be described with reference to FIG. 9 to FIG. 13.
  • the overall configuration of the medical treatment support system 1 according to the second embodiment, and the configurations of the image server 20 and the AR glasses 30 may be the same as those of the first embodiment (FIG. 1 to FIG. 3), so duplicated descriptions will be omitted.
  • the system 1 according to the second embodiment differs from the first embodiment in that it is configured to determine the breathing state of the patient PT and display a CT image according to that state. Specifically, as shown in FIG. 9, a marker MK4 (a marker for respiration detection) placed at a fluctuating point that moves with breathing is used.
  • the movement of this marker MK4 is detected, and a CT image according to the movement is displayed on the AR glasses 30.
  • the "fluctuating point” is, for example, near the patient's nipple (in the case of a man). If chest breathing is strong, this position is a suitable attachment position (fluctuation point) for the marker MK4.
  • CT images in the system 1 of the second embodiment will be described with reference to FIG. 10.
  • for each respiratory state, the coordinates of the marker MK4 are obtained and CT images Pcc10 to Pcc30 are acquired.
  • a deviation of up to about 1 cm can occur in the planar direction between the state in which breath is inhaled (inhalation state) and the state in which breath is exhaled (exhalation state).
  • a deviation can occur between the CT images Pcc10 to Pcc30.
  • multiple CT images Pcc10 to Pcc30 are acquired taking respiration into consideration, and are stored in association with the coordinate values of the markers MK.
  • the procedure shown in FIG. 12 is basically the same as the procedure shown in FIG. 7, but differs from the first embodiment in that, in steps S30 and S31, the position of the marker MK4 is identified in the view of the AR glasses 30 and the displayed CT image is switched to the one corresponding to that position.
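The switching step in S30/S31 can be sketched as selecting the stored CT image whose recorded MK4 coordinate is closest to the MK4 position currently observed. The function name, the one-axis coordinate reduction, and the stored mapping are hypothetical simplifications for illustration.

```python
def select_ct_for_breath_state(stored, mk4_y):
    """Pick the stored CT image whose recorded MK4 coordinate is closest
    to the MK4 position currently seen in the AR glasses.  `stored` maps
    a recorded MK4 coordinate (reduced to one axis here for brevity) to a
    CT image identifier such as Pcc10..Pcc30."""
    return min(stored.items(), key=lambda kv: abs(kv[0] - mk4_y))[1]

stored = {0.0: "Pcc10", 0.5: "Pcc20", 1.0: "Pcc30"}
img = select_ct_for_breath_state(stored, mk4_y=0.6)   # nearest key is 0.5
```

A real implementation would key the images by the full 3D marker coordinate, but the nearest-neighbor selection idea is the same.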
  • in the above, CT images are also taken for the intermediate state and are switched for display; alternatively, as shown in FIG. 13, CT images can be taken only for the exhalation and inhalation states, and the CT image for an intermediate state can be generated by interpolation based on the CT images of the exhalation and inhalation states.
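The interpolation alternative can be sketched as a voxel-wise linear blend between the exhalation and inhalation images. The actual interpolation processing is not specified in the text (a deformable registration may well be used in practice), so the linear blend below is only an illustrative assumption.

```python
def interpolate_ct(exhale_img, inhale_img, t):
    """Blend two CT images voxel by voxel.

    t = 0.0 -> exhalation image, t = 1.0 -> inhalation image; intermediate
    breathing states are approximated by linear interpolation."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be in [0, 1]")
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(exhale_img, inhale_img)]

# Toy 2x2 "images" standing in for CT slices:
exhale = [[0.0, 100.0], [200.0, 300.0]]
inhale = [[100.0, 200.0], [300.0, 400.0]]
mid = interpolate_ct(exhale, inhale, 0.5)   # halfway breathing state
```

The blend parameter `t` would be derived from the detected MK4 displacement between its exhalation and inhalation positions.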
  • the system of the second embodiment not only provides the same effects as the first embodiment, but also allows the display of CT images that correspond to the respiratory state of the patient PT, making it easier for the doctor DR to carry out medical procedures.
  • a medical treatment support system 1 according to a third embodiment will be described with reference to Fig. 14.
  • the overall configuration of the medical treatment support system 1 according to the third embodiment may be the same as that of the first embodiment (Fig. 1), and therefore a duplicated description will be omitted.
  • the 3D modeling engine 26 and the 3D model display engine 27 are omitted from the image server 20, and instead, a 3D modeling engine 395 and a 3D model display engine 396 are provided in the AR glasses 30.
  • the function of 3D modeling of CT images is transferred to the AR glasses 30, and the basic operation is the same as that of the first embodiment.
  • the configuration of the medical procedure support system 1 of the fourth embodiment will be described with reference to Fig. 15.
  • the overall configuration of the system 1 is the same as that of the first embodiment, and the configurations of the image server 20 and the AR glasses 30 may also be the same, so duplicated descriptions will be omitted.
  • the system 1 of the fourth embodiment differs from the first embodiment in that, in addition to the AR glasses 30, AR glasses 30T are provided; the AR glasses 30T can display images similar to those of the AR glasses 30 and are worn by the instructor DRT supervising the doctor DR.
  • the instructor DRT is in the vicinity of the doctor DR who is to be trained, and wears AR glasses 30T.
  • the AR glasses 30T are capable of observing the actual image Ppt of the patient PT, and like the AR glasses 30, are capable of superimposing and displaying a CT image Pcc.
  • the instructor DRT can provide guidance to the doctor DR by looking at the display on the AR glasses 30T.
  • the AR glasses 30 and AR glasses 30T may exchange data directly via wireless communication, or may send and receive data indirectly via the image server 20.
  • Fig. 16 shows the overall configuration of the medical treatment support system 1 according to the fifth embodiment.
  • the CT device 10 is not shown.
  • the medical procedure support system 1 of the fifth embodiment supports chest drainage by remote operation, where the target patient PT is in a hospital HP1, and the doctor DR in charge of the chest drainage is in a hospital HP2 that is geographically separated from the hospital HP1.
  • the hospitals HP1, HP2, and the image server 20 are connected by a network NW.
  • a patient PT is present in hospital HP1, and is affixed with the same marker MK as in the previously described embodiment.
  • hospital HP1 is provided with a robot 70 that is responsible for inserting a chest drainage catheter 60, and an actual image of the patient PT is captured by a camera 80.
  • the image captured by the camera 80 is sent to the AR glasses 30 via a network NW.
  • the robot 70 is operated by a controller 90 carried by a doctor DR in hospital HP2.
  • the AR glasses 30 display the actual image Ppt of the patient PT sent from the camera 80 superimposed on the CT image Pcc, and are capable of performing the same alignment and conversion as in the previously described embodiment.
  • the effects of the previous embodiments can also be obtained in medical procedures such as remote-controlled thoracic drainage.
  • the procedure performed by the doctor DR is thoracic drainage, but as described above, the present invention is not limited to thoracic drainage and can be applied to other procedures.
  • a similar system can be used for injections into the veins of the arm or the arteries of the wrist.
  • At least three markers MK can be attached around the veins of the forearm or the arteries of the wrist to take CT images, and the CT images can be displayed on the AR glasses 30 in the same manner.
  • at least three markers MK are attached around the spine on the back, and a CT scan is performed in the same manner (the chest CT scan is performed with the patient in the lateral position); this can then be used during epidural anesthesia.
  • the spinal cord and dura mater are identified in the CT image displayed on the AR glasses 30, and epidural anesthesia can be performed by inserting a needle into the epidural space.
  • the present invention is not limited to the above-mentioned embodiment, and various modified examples are included.
  • the above-mentioned embodiment has been described in detail to explain the present invention in an easy-to-understand manner, and is not necessarily limited to those having all the configurations described.
  • image processing may be performed such as lowering the brightness of the CT image of the part where the catheter 60 exists, or increasing the transmittance of the catheter 60 relative to the actual image of the patient.
  • a configuration may be adopted in which the transmittance of the CT image is changed by voice operation via the microphone 34 of the AR glasses 30.
  • a configuration may be adopted in which the transmittance of the catheter 60 in the CT image can be changed according to the doctor's preference, not automatically.
  • a medical procedure support system 1 according to a sixth embodiment will be described with reference to Figs. 17 to 20.
  • the overall configuration of the medical procedure support system 1 according to the sixth embodiment and the configuration of the image server 20 may be the same as those of the first embodiment (Figs. 1 and 2), and therefore a duplicated description will be omitted.
  • the system 1 according to the sixth embodiment differs from the first embodiment in that, as shown in Fig. 17, when an image display control program stored in the AR glasses 30 is executed, the AR glasses 30 also function as a determination unit 397. The determination unit 397 will be described later.
  • the system 1 of the sixth embodiment differs from the first embodiment in that in addition to the three markers MK1, MK2, and MK3, a marker MK4 is used to determine the degree of match between the patient whose CT image Pcc was taken and the patient when a procedure such as thoracic drainage is performed using the AR glasses 30.
  • Marker MK4 is used to identify the patient and is affixed to a position where the amount of change in position due to the patient's breathing is below a reference value.
  • a position where the amount of change in position due to the patient's breathing is below a reference value is a position that can be considered a fixed point as described in the first embodiment.
  • marker MK4 is affixed near the manubrium, but the position at which marker MK4 is affixed is not limited to near the manubrium.
  • the CT device 10 captures an image of the region of interest DA of the patient PT together with four markers MK1 to MK4, and obtains a CT image Pcc as an image of the region of interest.
  • the patient PT moves to the treatment room where the doctor DR performs treatment such as thoracic drainage.
  • the patient PT also has markers MK1 to MK4 attached to their body surface when the doctor DR performs treatment such as thoracic drainage.
  • the doctor in charge of the treatment wears the AR glasses 30 and performs the treatment while observing the actual image Ppt of the patient seen through the AR glasses 30 and the CT image Pcc superimposed on it.
  • When the image display control program stored in the AR glasses 30 is executed, the AR glasses 30 function as a marker identification unit 391, a coordinate calculation unit 392, a gaze guidance unit 393, an instrument tip identification unit 394, and a conversion unit 395, as described in the first embodiment. In this embodiment, the AR glasses 30 also function as a determination unit 397. Note that the processing of the determination unit 397 is executed, for example, between step S26 and step S27 in the flowchart of FIG. 7.
  • the determination unit 397 determines the degree of agreement between the patient displayed on the AR glasses 30 and the patient whose region of interest image was captured, based on the positions Xm1-Xm4 of the markers MK1-MK4 identified by the marker identification unit 391 and the positions Xm1'-Xm4' of the markers MK1-MK4 in the CT image Pcc as the region of interest image.
  • the determination unit 397 determines the degree of correspondence between the patient displayed on the AR glasses 30 and the patient from whom the region of interest image was captured, based on the distance between marker MK4 as a specific marker identified from among the markers identified by the marker identification unit 391 and the other markers MK1-MK3, and the distance between marker MK4' in the CT image Pcc as the region of interest image and the other markers MK1'-MK3'.
  • the determination unit 397 calculates distances d14, d24, and d34 from the position Xm4 of the marker MK4 to the positions Xm1 to Xm3 of the other markers MK1 to MK3. Similarly, it calculates distances d14', d24', and d34' from the position Xm4' of the marker MK4' in the CT image Pcc as the region of interest image to the positions Xm1' to Xm3' of the other markers MK1' to MK3'.
  • the determination unit 397 calculates the average value da of the distances d14, d24, and d34, and the average value da' of the distances d14', d24', and d34', and then calculates the difference Δd (as an absolute value) between the average value da and the average value da'. If the difference Δd is equal to or less than a predetermined threshold, the determination unit 397 determines that the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was captured match. On the other hand, if the difference Δd is greater than the predetermined threshold, it determines that they do not match; in this case, a warning message indicating that the patients are different is displayed on the display unit 31 of the AR glasses 30.
  • the degree of agreement may be calculated based on the difference Δd, and the calculated degree of agreement may be displayed on the display unit 31 of the AR glasses 30.
  • the degree of agreement is expressed as a percentage, for example, and can be calculated using a formula in which the degree of agreement is 100% when the difference Δd is 0 and decreases as the difference Δd increases.
  • the determination unit 397 may also determine whether the patient displayed on the AR glasses 30 matches the patient from whom the region of interest image was captured by determining whether the sum of the squares of the differences between the corresponding distances is less than or equal to a predetermined threshold.
  • the determination unit 397 calculates the difference Δd14 between the distances d14 and d14', the difference Δd24 between the distances d24 and d24', and the difference Δd34 between the distances d34 and d34'.
  • the sum of the squares of the differences Δd14, Δd24, and Δd34 is then calculated. If the calculated sum of squares is equal to or less than the threshold, it is determined that the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was captured match; if it is greater than the threshold, it is determined that they do not match.
  • the degree of agreement may be calculated based on the calculated square sum, and the calculated degree of agreement may be displayed on the display unit 31 of the AR glasses 30. The degree of agreement is expressed, for example, as a percentage, and can be calculated using a formula in which the degree of agreement is 100% when the calculated square sum is 0, and the degree of agreement decreases as the calculated square sum increases.
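The two distance-based checks described above (mean-difference and sum-of-squares) can be sketched as follows. The threshold values and dictionary layout are illustrative assumptions, not values given in the text.

```python
import math

def match_by_mean_distance(live, ct, threshold):
    """Compare the mean distance from marker MK4 to MK1-MK3 in the live
    view with the same mean in the CT image; the patients 'match' when
    the absolute difference is within the threshold.  `live` and `ct`
    map marker names to coordinates (math.dist requires Python 3.8+)."""
    d_live = [math.dist(live["MK4"], live[k]) for k in ("MK1", "MK2", "MK3")]
    d_ct = [math.dist(ct["MK4"], ct[k]) for k in ("MK1", "MK2", "MK3")]
    return abs(sum(d_live) / 3 - sum(d_ct) / 3) <= threshold

def match_by_squared_sum(live, ct, threshold):
    """Variant: sum of squared differences of the corresponding
    MK4-to-MKn distances, compared against a threshold."""
    s = 0.0
    for k in ("MK1", "MK2", "MK3"):
        s += (math.dist(live["MK4"], live[k]) - math.dist(ct["MK4"], ct[k])) ** 2
    return s <= threshold

live = {"MK1": (0, 0), "MK2": (10, 0), "MK3": (0, 10), "MK4": (5, 5)}
ct = {"MK1": (0, 0), "MK2": (10, 0), "MK3": (0, 10), "MK4": (5, 5)}
same = match_by_mean_distance(live, ct, threshold=1.0)  # identical -> True
```

In the actual system the warning display would be triggered when either check returns False.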
  • the determination unit 397 may also determine the degree of correspondence between the patient displayed on the AR glasses 30 and the patient from whom the region of interest image was captured, based on the angle formed by the line segments connecting marker MK4 as a specific marker identified from among the markers identified by the marker identification unit 391 to the other markers MK1 to MK3, and the angle formed by the line segments connecting marker MK4' in the CT image Pcc as the region of interest image to the other markers MK1' to MK3'.
  • the determination unit 397 calculates the angle θ12 between the line segment L14 connecting the position Xm4 of the marker MK4 identified by the marker identification unit 391 to the position Xm1 of the marker MK1 and the line segment L24 connecting the position Xm4 of the marker MK4 to the position Xm2 of the marker MK2. It also calculates the angle θ13 between the line segment L14 and the line segment L34 connecting the position Xm4 of the marker MK4 to the position Xm3 of the marker MK3, and the angle θ23 between the line segment L24 and the line segment L34.
  • similarly, the determination unit 397 calculates the angle θ12' between the line segment L14' connecting the position Xm4' of the marker MK4' in the CT image Pcc to the position Xm1' of the marker MK1' and the line segment L24' connecting the position Xm4' of the marker MK4' to the position Xm2' of the marker MK2'. It also calculates the angle θ13' between the line segment L14' and the line segment L34' connecting the position Xm4' of the marker MK4' to the position Xm3' of the marker MK3', and the angle θ23' between the line segment L24' and the line segment L34'.
  • the determination unit 397 calculates the average value θa of the angles θ12, θ13, and θ23, and the average value θa' of the angles θ12', θ13', and θ23', and then calculates the difference Δθ (as an absolute value) between the average value θa and the average value θa'. If the difference Δθ is equal to or less than a predetermined threshold, the determination unit 397 determines that the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was captured match. On the other hand, if the difference Δθ is greater than the predetermined threshold, it determines that they do not match; in this case, a warning message indicating that the patients are different is displayed on the display unit 31 of the AR glasses 30.
  • the degree of agreement may be calculated based on the difference Δθ, and the calculated degree of agreement may be displayed on the display unit 31 of the AR glasses 30.
  • the degree of agreement is expressed as a percentage, for example, and can be calculated using a formula in which the degree of agreement is 100% when the difference Δθ is 0 and decreases as the difference Δθ increases.
  • the determination unit 397 may also determine the degree of match between the patient displayed on the AR glasses 30 and the patient whose region of interest image was captured by determining whether the sum of the squares of the differences between the corresponding angles is equal to or less than a predetermined threshold value.
  • the determination unit 397 calculates the difference Δθ12 between the angles θ12 and θ12', the difference Δθ13 between the angles θ13 and θ13', and the difference Δθ23 between the angles θ23 and θ23'.
  • the sum of the squares of the differences Δθ12, Δθ13, and Δθ23 is then calculated. If the calculated sum of squares is equal to or less than the threshold, it is determined that the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was captured match; if it is greater than the threshold, it is determined that they do not match.
  • the degree of agreement may be calculated based on the calculated square sum, and the calculated degree of agreement may be displayed on the display unit 31 of the AR glasses 30. The degree of agreement is expressed, for example, as a percentage, and can be calculated using a calculation formula in which the degree of agreement is 100% when the calculated square sum is 0, and the degree of agreement decreases as the calculated square sum increases.
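The angle-based comparison can be sketched in the same style. The vector-angle computation below is standard geometry; the threshold and marker layout are illustrative assumptions.

```python
import math

def angle_at(vertex, p, q):
    """Angle (radians) at `vertex` between segments vertex->p and vertex->q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp rounding error

def match_by_mean_angle(live, ct, threshold):
    """Mean of the three angles theta12, theta13, theta23 at MK4, compared
    between the live view and the CT image; `threshold` is illustrative."""
    pairs = (("MK1", "MK2"), ("MK1", "MK3"), ("MK2", "MK3"))
    a_live = [angle_at(live["MK4"], live[a], live[b]) for a, b in pairs]
    a_ct = [angle_at(ct["MK4"], ct[a], ct[b]) for a, b in pairs]
    return abs(sum(a_live) / 3 - sum(a_ct) / 3) <= threshold

live = {"MK1": (0, 0), "MK2": (10, 0), "MK3": (0, 10), "MK4": (5, 5)}
ct = dict(live)  # identical marker layout -> angles agree exactly
ok = match_by_mean_angle(live, ct, threshold=math.radians(2.0))
```

A design note: unlike the distance checks, angles are invariant to uniform scaling, so the angle test and the distance test capture complementary differences between patients.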
  • marker MK4 is set as the specific marker, but any of markers MK1 to MK3 may be set as the specific marker.
  • the determination unit 397 may also determine the degree of match between the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was captured based on the distance between the position of each of the markers MK1 to MK4 identified by the marker identification unit 391 and the position of the corresponding marker MK1' to MK4' in the CT image Pcc as the region-of-interest image.
  • the coordinates Xm1' to Xm4' in the coordinate system Cx1 of the markers MK1' to MK4' in the CT image Pcc are transformed into the coordinate system Cx2 of the AR glasses 30 by affine transformation or the like, and the degree of match between the patient displayed on the AR glasses 30 and the patient whose region of interest image was captured is determined based on the transformed coordinates Xm1" to Xm4" and the coordinates Xm1 to Xm4 of the markers MK identified in the AR glasses 30.
  • the sum of the squares of the distance D1 between the coordinates Xm1" and Xm1, the distance D2 between Xm2" and Xm2, the distance D3 between Xm3" and Xm3, and the distance D4 between Xm4" and Xm4 is calculated. If the calculated sum of squares is equal to or less than a threshold, it is determined that the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was captured; if it is greater than the threshold, it is determined that they do not match.
  • the sum of squares may be calculated in the same manner as above based on the markers MK1'-MK4' in the CT image Pcc and the coordinates Xm1-Xm4 of the markers MK identified in the AR glasses 30, and the degree of match between the patient displayed on the AR glasses 30 and the patient whose region of interest image was captured may be determined based on the calculated sum of squares.
  • the degree of match may be calculated based on the calculated sum of squares and displayed on the display unit 31 of the AR glasses 30.
  • the degree of match is expressed as a percentage, for example, and can be calculated using a formula in which the degree of match is 100% when the calculated sum of squares is 0, and the degree of match decreases as the calculated sum of squares increases.
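Assuming the CT marker coordinates have already been transformed into the AR-glasses coordinate system (the affine transformation itself is performed by the conversion unit), the residual-based check and one possible percentage formula can be sketched as below. The text does not specify the percentage formula, so the decay function here is a labeled assumption.

```python
import math

def squared_distance_sum(transformed_ct, live):
    """Sum of squared distances D1^2..D4^2 between the CT marker positions
    (already transformed into the AR-glasses coordinate system) and the
    marker positions identified in the live view."""
    return sum(math.dist(transformed_ct[k], live[k]) ** 2
               for k in ("MK1", "MK2", "MK3", "MK4"))

def degree_of_match(sq_sum, scale=100.0):
    """One possible percentage formula (not specified in the text):
    100 % at sq_sum = 0, decaying toward 0 % as sq_sum grows."""
    return 100.0 / (1.0 + sq_sum / scale)

live = {"MK1": (0, 0), "MK2": (10, 0), "MK3": (0, 10), "MK4": (5, 5)}
ct_t = {"MK1": (0, 0), "MK2": (10, 0), "MK3": (0, 10), "MK4": (5, 5)}
s = squared_distance_sum(ct_t, live)   # identical positions -> 0.0
pct = degree_of_match(s)               # -> 100.0 %
```

Any monotonically decreasing function of the residual would satisfy the description; the reciprocal form above is just one convenient choice.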
  • the system of the sixth embodiment can determine whether the patient displayed on the AR glasses 30 matches the patient whose region of interest image was captured. This can prevent a patient other than the patient whose CT image was captured from mistakenly receiving treatment from the doctor DR.
  • the marker MK4 is used to identify the patient, but it may also be used as a marker for accurate alignment when the actual image Ppt of the patient PT and the CT image Pcc are superimposed.


Abstract

This image display device acquires a region-of-interest image obtained by attaching markers to the body surface of a patient and capturing an image of a region of interest of the patient together with the markers. A line-of-sight guide unit guides the line of sight of a user to one of the markers in an image display unit. A marker identification unit identifies the marker in a region corresponding to the line of sight of the user guided by the line-of-sight guide unit. A conversion unit performs, for the region-of-interest image, conversion for aligning the marker in the region-of-interest image to the marker identified by the marker identification unit. The region-of-interest image converted by the conversion unit is displayed on the image display unit.

Description

Image display device and image display control program

The present invention relates to an image display device and an image display control program, and more specifically to an image display device and an image display control program for displaying an image of a region of interest.

When performing procedures on a patient (injections, incisions, catheter insertion, and the like), a doctor needs to accurately grasp the position and condition of the patient's organs and tissues before carrying out the procedure. For example, in respiratory surgery, a procedure called thoracic drainage is performed. Thoracic drainage is a procedure in which a catheter or the like is inserted into the patient's body, and blood, pus, and air that have accumulated in the thoracic cavity are drained out of the body via the catheter. When inserting a catheter into the patient's body for thoracic drainage, the position and condition of the patient's organs must be accurately grasped, and the catheter must not be allowed to reach an organ.

At present, no technique has been established for accurately grasping the positions of organs and safely and precisely inserting a catheter into a patient's body when performing thoracic drainage. In general, when a catheter is inserted into a patient's body, an ultrasound device has conventionally been used to confirm the positions of internal organs, and the catheter is inserted while avoiding them. In thoracic drainage, however, an ultrasound device cannot be used because the ultrasound waves are reflected by the air in the thoracic cavity. Moreover, ultrasound devices are very difficult to handle, and it is extremely difficult for the doctor in charge of inserting the catheter to perform the various steps of the procedure while operating an ultrasound device.

Under these circumstances, in thoracic drainage, a chest CT image is acquired and analyzed in advance using a CT device as a region-of-interest image, and the doctor refers to this CT image beforehand when performing the drainage. However, a chest CT image is a cross-sectional slice of the body, and grasping the three-dimensional positions of organs inevitably depends on the doctor's experience and intuition. Thus, even the method of acquiring and analyzing chest CT images in advance is insufficient for accurately grasping the positions of the patient's organs, and at present, whether a catheter is inserted at the correct position during thoracic drainage depends on the doctor's level of skill. Although the above problems concern thoracic drainage, similar problems can arise in other procedures: since it is not easy to grasp the state of the tissues and organs inside the patient's body, it is difficult to determine where a treatment instrument should be inserted.

In recent years, with the advancement of xR technologies such as VR (Virtual Reality), AR (Augmented Reality), and MR (Mixed Reality), image display systems have been proposed that can superimpose a region-of-interest image and a real image of the patient on the same screen of an image display device such as AR glasses (see, for example, Published Japanese Translation of PCT International Publication No. 2021-505226). Such a system allows a doctor to observe the region-of-interest image and the real image of the patient simultaneously through the image display device.

In a conventional image display system, the region-of-interest image captured in advance must be aligned with the image of the actual patient (the real image). However, simply aligning the region-of-interest image with the patient image requires an enormous amount of computation for the alignment and for the subsequent image processing, which significantly affects the display speed. If the alignment is imperfect, the region-of-interest image is not displayed at the correct position on the real image of the patient, and the above problem remains.

The present invention has been made in view of these problems, and provides an image display device and an image display control program that make it possible to quickly align a region-of-interest image with an image of the actual patient with a small amount of computation.

To solve the above problems, an image display device according to the present invention includes: an image acquisition unit that acquires a region-of-interest image obtained by attaching markers to the body surface of a patient and capturing an image of the patient's region of interest together with the markers; a gaze guidance unit that, on an image display unit provided in the user's field of view, guides the user's gaze to one of the markers; a marker identification unit that identifies the marker in a region corresponding to the user's gaze guided by the gaze guidance unit; and a conversion unit that performs, on the region-of-interest image, a conversion for aligning the marker in the region-of-interest image with the marker identified by the marker identification unit, wherein the region-of-interest image converted by the conversion unit is displayed on the image display unit.

An image display control program according to the present invention is configured to cause a computer to execute the steps of: acquiring a region-of-interest image obtained by attaching markers to the body surface of a patient and capturing an image of the patient's region of interest together with the markers; displaying the region-of-interest image on an image display unit together with the patient to whom the markers are attached; guiding the user's gaze to one of the markers on the image display unit; identifying the marker in a region corresponding to the guided gaze of the user; performing, on the region-of-interest image, a conversion for aligning the marker in the region-of-interest image with the identified marker; and displaying the converted region-of-interest image on the image display unit.
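The sequence of program steps described above can be sketched as a simple pipeline. Every helper here is a hypothetical stand-in for one of the functional units (image acquisition, gaze guidance, marker identification, conversion, display); none of these names come from the source.

```python
def run_display_pipeline(acquire, guide_gaze, identify_marker, convert, display):
    """Orchestrate the claimed steps in order; each argument is a callable
    standing in for one functional unit (hypothetical names)."""
    roi_image, roi_markers = acquire()      # region-of-interest image + marker list
    correspondences = []
    for marker_name in roi_markers:
        region = guide_gaze(marker_name)    # guide the user's gaze to the marker
        live_pos = identify_marker(region)  # identify the marker near the gaze
        correspondences.append((marker_name, live_pos))
    aligned = convert(roi_image, correspondences)  # align ROI image to live markers
    display(aligned)                        # show the converted ROI image
    return aligned

# Toy stand-ins: markers are "identified" at fixed labels, and the
# "conversion" simply records the marker correspondences.
shown = []
result = run_display_pipeline(
    acquire=lambda: ("Pcc", ["MK1", "MK2", "MK3"]),
    guide_gaze=lambda name: {"around": name},
    identify_marker=lambda region: region["around"] + "@live",
    convert=lambda img, log: (img, tuple(log)),
    display=shown.append,
)
```

The point of the sketch is the ordering: gaze guidance precedes marker identification, which precedes conversion and display, exactly as in the claimed steps.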

According to the present invention, it is possible to provide an image display device and an image display control program that make it possible to quickly align a region-of-interest image with an image of the actual patient with a small amount of computation.

FIG. 1 is a schematic diagram illustrating the overall configuration of a medical procedure support system 1 including an image display device according to a first embodiment.
FIG. 2 is a block diagram illustrating an example of the hardware configuration of an image server 20.
FIG. 3 is a block diagram illustrating an example of the configuration of the AR glasses 30.
FIG. 4 is a conceptual diagram explaining calculation of the coordinates Xm1' to Xm3' of the markers MK in a CT image Pcc and the coordinates Xm1 to Xm3 of the markers MK in the AR glasses 30.
FIG. 5 is a schematic diagram showing an example of how the markers MK are attached to the body surface of the patient PT.
FIG. 6 is a flowchart showing the procedure for capturing a CT image Pcc using the CT device 10 and calculating the coordinates of the markers MK.
FIG. 7 is a flowchart showing the procedure for superimposing the CT image Pcc and the actual image Ppt of the patient PT on the AR glasses 30.
FIG. 8 shows an example of the screen display when the line of sight is guided to a marker MK.
FIG. 9 is a schematic diagram illustrating a second embodiment.
FIG. 10 is a schematic diagram illustrating the second embodiment.
FIG. 11 is a flowchart illustrating the operation of the system 1 according to the second embodiment.
FIG. 12 is a flowchart illustrating the operation of the system 1 according to the second embodiment.
FIG. 13 shows a modification of the second embodiment.
FIG. 14 is a block diagram illustrating a medical procedure support system 1 according to a third embodiment.
FIG. 15 is a schematic diagram illustrating the configuration of a medical procedure support system 1 according to a fourth embodiment.
FIG. 16 is a schematic diagram illustrating the configuration of a medical procedure support system 1 according to a fifth embodiment.
FIG. 17 is a schematic diagram illustrating the configuration of a medical procedure support system 1 according to a sixth embodiment.
FIG. 18 is a schematic diagram showing an example of how the markers MK are attached to the body surface of the patient PT.
FIG. 19 is a diagram for explaining determination of the degree of match of a patient.
FIG. 20 is a diagram for explaining determination of the degree of match of a patient.

 以下、添付図面を参照して本実施形態について説明する。添付図面では、機能的に同じ要素は同じ番号で表示される場合もある。なお、添付図面は本開示の原理に則った実施形態と実装例を示しているが、これらは本開示の理解のためのものであり、決して本開示を限定的に解釈するために用いられるものではない。本明細書の記述は典型的な例示に過ぎず、本開示の特許請求の範囲又は適用例を如何なる意味においても限定するものではない。 The present embodiment will be described below with reference to the attached drawings. In the attached drawings, functionally identical elements may be indicated by the same numbers. Note that the attached drawings show embodiments and implementation examples according to the principles of the present disclosure, but these are for the purpose of understanding the present disclosure and are in no way intended to limit the interpretation of the present disclosure. The descriptions in this specification are merely typical examples and are not intended to limit the scope or application examples of the present disclosure in any way.

 本実施形態では、当業者が本開示を実施するのに十分詳細にその説明がなされているが、他の実装・形態も可能で、本開示の技術的思想の範囲と精神を逸脱することなく構成・構造の変更や多様な要素の置き換えが可能であることを理解する必要がある。従って、以降の記述をこれに限定して解釈してはならない。 In this embodiment, the disclosure has been described in sufficient detail for a person skilled in the art to implement the disclosure, but it should be understood that other implementations and forms are possible, and that changes to the configuration and structure and substitutions of various elements are possible without departing from the scope and spirit of the technical ideas of the disclosure. Therefore, the following description should not be interpreted as being limited to this.

[第1の実施の形態]
 図1を参照して、第1の実施の形態の画像表示装置を含む医療処置支援システム1の全体構成を説明する。この医療処置支援システム1は、一例として、胸腔ドレナージを実行することを支援するためのシステムを例示しているが、本発明はこれに限定されるものではなく、他の処置、例えば中心静脈圧測定(CVP)や硬膜外麻酔などを支援するシステムにおいても同様の構成を採用することが可能である。
[First embodiment]
The overall configuration of a medical procedure support system 1 including an image display device according to a first embodiment will be described with reference to Fig. 1. As an example, the medical procedure support system 1 illustrates a system for supporting the execution of thoracic drainage, but the present invention is not limited to this, and a similar configuration can also be adopted in systems for supporting other procedures, such as central venous pressure measurement (CVP) and epidural anesthesia.

 図1のシステムは、一例として、画像撮像装置としてのCT装置10と、画像サーバ20と、画像表示装置としてのARグラス30と、ディスプレイ40とを備え得る。本システム1において、胸腔ドレナージを受ける患者PTは、まずCT装置10において、関心領域としての胸腔付近のCT画像Pcc(Pcc0)を取得する。CT画像Pccは、胸腔ドレナージの実行時においてARグラス30に表示される。なお、本明細書では、このCT画像Pcc(Pcc0)のように、関心領域とその周辺の臓器等を含む画像を総称して「関心領域画像」と呼ぶことがある。 The system in FIG. 1 may, as an example, comprise a CT device 10 as an image capturing device, an image server 20, AR glasses 30 as an image display device, and a display 40. In this system 1, for a patient PT undergoing thoracic drainage, a CT image Pcc (Pcc0) of the region of interest near the thoracic cavity is first acquired with the CT device 10. The CT image Pcc is displayed on the AR glasses 30 when the thoracic drainage is performed. Note that in this specification, an image that includes the region of interest and the surrounding organs, such as this CT image Pcc (Pcc0), may be collectively referred to as a "region-of-interest image."

 患者PTは、CT装置10の撮像時、及び胸腔ドレナージの実行時において、その体表面にマーカMKを貼付される。マーカMKは、CT画像Pccと患者PTの実際の画像とをARグラス30において位置合わせするために使用される。一例としては、図1に示すように、患者の体表面において、関心領域を囲うように少なくとも3つのマーカMK1~3が貼付される。マーカMKに基づく各種演算や動作については後述する。一人の患者PTに貼付されるマーカMKの数は、少なくとも3個であればよいが、より正確な位置合わせを行うため、3より大きい数とすることは可能である。 The patient PT has markers MK affixed to their body surface when imaging with the CT device 10 and when performing thoracic drainage. The markers MK are used to align the CT image Pcc with the actual image of the patient PT in the AR glasses 30. As an example, as shown in FIG. 1, at least three markers MK1 to 3 are affixed to the patient's body surface to surround the area of interest. Various calculations and operations based on the markers MK will be described later. At least three markers MK should be affixed to one patient PT, but it is possible to use more than three markers to achieve more accurate alignment.

 CT画像の撮像の終了後、患者PTは、胸腔ドレナージの処置室に移動して医師DRによる胸腔ドレナージを受ける。マーカMKは、後述するようにCT画像Pccの位置合わせのため、CT撮影時と同様の位置に貼付されたままとする。胸腔ドレナージを担当する医師DRは、ARグラス30を装着して、ARグラス30越しに見える患者の実際の画像Pptと、これに重畳表示されたCT画像Pccとを観察して胸腔ドレナージを実行する。重畳表示の際、後述する方法により、CT画像Pccは所定の変換の対象とされる。 After the CT images have been taken, the patient PT moves to the thoracic drainage treatment room and receives thoracic drainage from the doctor DR. The marker MK remains attached in the same position as during CT imaging in order to align the CT image Pcc as described below. The doctor DR in charge of the thoracic drainage wears AR glasses 30 and performs thoracic drainage by observing the actual image Ppt of the patient seen through the AR glasses 30 and the CT image Pcc superimposed on it. When superimposed, the CT image Pcc is subject to a specified transformation using a method described below.

 医師DRは、胸腔ドレナージのためのカテーテル60を保持して、CT画像Pcc及び患者の実際の画像Pptに基づいて、目標とする胸腔内の適切な位置にカテーテル60を挿入する。また、補助的にディスプレイ40に表示されたCT画像Pccを参照することもできる。医師DRは、ARグラス30において重畳表示されたCT画像Pccにおいて、胸郭と萎んだ肺の位置を確認し、胸腔内のデッドスペースを認識する。そして、肋間からカテーテル60を挿入する。挿入部が正しい位置と角度であることをARグラス30上に表示されたCT画像Pccで確認しつつ、カテーテル60の挿入作業を進める。 The doctor DR holds the catheter 60 for thoracic drainage and inserts it into the appropriate location within the thoracic cavity based on the CT image Pcc and the patient's actual image Ppt. The doctor DR can also refer to the CT image Pcc displayed on the display 40 as an auxiliary. The doctor DR checks the position of the rib cage and deflated lungs in the CT image Pcc superimposed on the AR glasses 30, and recognizes the dead space within the thoracic cavity. The doctor DR then inserts the catheter 60 between the ribs. The doctor DR proceeds with the insertion of the catheter 60 while checking the CT image Pcc displayed on the AR glasses 30 to make sure that the insertion site is in the correct position and angle.

 CT装置10は、患者PTの胸部のCT画像Pccを撮像するための装置である。ここでは、CT画像Pccは、胸腔ドレナージを実行する場合において、患者PTの関心領域周辺の臓器の位置や状態を把握するために用いられる。CT装置10は、患者の臓器の位置や状態を把握するための画像を撮像する撮像装置の一例であって、これに限定されるものではない。患者の臓器の位置や状態を把握するための画像が得られる限りにおいて、CT装置10以外の撮像装置、例えばMRI(Magnetic Resonance Imaging)装置が利用されてもよい。 The CT device 10 is a device for capturing a CT image Pcc of the chest of the patient PT. Here, the CT image Pcc is used to grasp the position and condition of the organs around the area of interest of the patient PT when performing thoracic drainage. The CT device 10 is an example of an imaging device that captures images for grasping the position and condition of the patient's organs, and is not limited to this. Imaging devices other than the CT device 10, such as an MRI (Magnetic Resonance Imaging) device, may be used as long as they can obtain images for grasping the position and condition of the patient's organs.

 また、CT装置10は、複数の医療処置支援システム1の間で共有されてもよい。換言すれば、医療処置支援システム1は、CT装置10を独自で備えている必要はなく、例えば外部機関に存在するCT装置10で撮像されたCT画像を取得可能に構成されていれば十分である。CT装置10と画像サーバ20は、LAN(Local Area Network)、WAN(Wide Area Network)、インターネット、専用回線等により接続されてもよい。また、画像サーバ20とARグラス30とも、同一の組織(病院等)に存在する必要はなく、ARグラス30を管理する一組織は、他組織に存在する画像サーバ20に適宜アクセスするような仕組みにされていてもよい。 Furthermore, the CT device 10 may be shared among multiple medical treatment support systems 1. In other words, the medical treatment support system 1 does not need to have its own CT device 10, and it is sufficient if it is configured to be able to acquire CT images taken by a CT device 10 located at an external institution, for example. The CT device 10 and the image server 20 may be connected by a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, a dedicated line, etc. Furthermore, the image server 20 and the AR glasses 30 do not need to be located in the same organization (hospital, etc.), and a system may be set up such that an organization managing the AR glasses 30 can access an image server 20 located in another organization as appropriate.

 画像サーバ20は、撮像されたCT画像Pccを管理すると共に、ARグラス30における3次元表示のための画像処理を実行するコンピュータである。ARグラス30は、画像サーバ20から得られたCT画像Pccを、患者PTの実際の画像Pptと共に重畳表示する。 The image server 20 is a computer that manages the captured CT images Pcc and executes image processing for three-dimensional display in the AR glasses 30. The AR glasses 30 superimpose and display the CT images Pcc obtained from the image server 20 together with the actual image Ppt of the patient PT.

 図2を参照して、画像サーバ20のハードウエア構成の一例を説明する。画像サーバ20は、一例として、演算制御装置としてのCPU(Central Processing Unit)21、入出力インタフェース22、RAM(Random Access Memory)23、ROM(Read Only Memory)24、ストレージ装置25、3Dモデル化エンジン26、及び3Dモデル表示エンジン27を備え得る。 An example of the hardware configuration of the image server 20 will be described with reference to FIG. 2. As an example, the image server 20 may include a CPU (Central Processing Unit) 21 as an arithmetic and control device, an input/output interface 22, a RAM (Random Access Memory) 23, a ROM (Read Only Memory) 24, a storage device 25, a 3D modeling engine 26, and a 3D model display engine 27.

 CPU21は、画像サーバ20の各種動作に係る各種演算を担当する演算装置である。CPU21に加えて、画像処理を担当するGPU(Graphical Processing Unit)が設けられてもよい。入出力インタフェース22は、CT装置10を含む外部装置との間のデータや信号の入出力を担当するインタフェース装置である。RAM23は、画像サーバ20に格納される画像表示制御プログラムの実行時における各種演算データ等を一時保持する機能を有する。ROM24は、BIOSや周辺機器のファームウエアを記憶している。ストレージ装置25は、上述の画像表示制御プログラムの他、CT装置10で撮像されたCT画像Pccを記憶する記憶装置であり、例えばハードディスクドライブ装置、ソリッドステートドライブ装置である。RAM23、ROM24、ストレージ装置25は、記憶装置の一例であって、これとは異なる記憶装置の構成が採用可能であるのは言うまでもない。 The CPU 21 is a calculation device that handles various calculations related to various operations of the image server 20. In addition to the CPU 21, a GPU (Graphical Processing Unit) that handles image processing may be provided. The input/output interface 22 is an interface device that handles input and output of data and signals between external devices including the CT device 10. The RAM 23 has a function of temporarily storing various calculation data and the like during execution of the image display control program stored in the image server 20. The ROM 24 stores the BIOS and firmware of peripheral devices. The storage device 25 is a storage device that stores the above-mentioned image display control program as well as the CT image Pcc captured by the CT device 10, and is, for example, a hard disk drive device or a solid state drive device. The RAM 23, ROM 24, and storage device 25 are examples of storage devices, and it goes without saying that other storage device configurations can be adopted.

 3Dモデル化エンジン26は、撮像されたCT画像Pcc0(2次元画像)を、3次元画像のCT画像Pccに変換する演算部である。また、3Dモデル表示エンジン27は、3次元画像のCT画像Pccを、ARグラス30における表示に適したデータ形式に変換する演算部である。3Dモデル化エンジン26及び3Dモデル表示エンジン27は、CPU21及びプログラムにより実現されてもよいし、CPU21とは別の画像処理プロセッサやGPUにより実現されてもよい。 The 3D modeling engine 26 is a calculation unit that converts the captured CT image Pcc0 (two-dimensional image) into a three-dimensional CT image Pcc. The 3D model display engine 27 is a calculation unit that converts the three-dimensional CT image Pcc into a data format suitable for display on the AR glasses 30. The 3D modeling engine 26 and the 3D model display engine 27 may be realized by the CPU 21 and a program, or may be realized by an image processing processor or GPU separate from the CPU 21.

 図3を参照して、画像表示装置としてのARグラス30の構成の一例を説明する。ARグラス30は、一例として、表示部31、導光部32、スピーカ33、マイクロホン34、センサ35、無線通信部36、可視光/赤外線カメラ37、ARグラス制御部38を備え得る。また、ARグラス30は、画像表示を制御するための画像表示制御プログラムを格納している。ARグラス制御部38はさらに、CPU381、RAM382、及びROM383を備えている。 With reference to FIG. 3, an example of the configuration of the AR glasses 30 as an image display device will be described. As an example, the AR glasses 30 may include a display unit 31, a light guide unit 32, a speaker 33, a microphone 34, a sensor 35, a wireless communication unit 36, a visible light/infrared camera 37, and an AR glasses control unit 38. The AR glasses 30 also store an image display control program for controlling the image display. The AR glasses control unit 38 further includes a CPU 381, a RAM 382, and a ROM 383.

 表示部31は、画像サーバ20から無線通信部36を介して受信された3次元の画像データPccを表示させる表示装置であり、例えばマイクロディスプレイである。無線通信部36及びRAM382(メモリ)は、ARグラス30においてCT画像を取得する画像取得部として機能する。画像取得部で取得されたCT画像が、画像表示部としての表示部31に表示される。導光部32は、表示部31が発する光をユーザの眼前まで導光して、ユーザの眼前に画像データPccの画像を表示させる部材である。一例として、導光部32は、ユーザの眼前に配置されて、透光性を有すると共に表示部31からの光を導光可能にされたホログラフィックレンズにより構成することができる。表示部31及び導光部32の構成は、画像データPccが患者PTの実際の画像Pptと重畳表示することができる限りにおいて、別の構成を採用することができることは言うまでもない。例えば、表示部31及び導光部32は、レーザスキャナ等により置換することが可能である。 The display unit 31 is a display device that displays the three-dimensional image data Pcc received from the image server 20 via the wireless communication unit 36, and is, for example, a microdisplay. The wireless communication unit 36 and the RAM 382 (memory) function as an image acquisition unit that acquires CT images in the AR glasses 30. The CT image acquired by the image acquisition unit is displayed on the display unit 31 as an image display unit. The light guide unit 32 is a member that guides the light emitted by the display unit 31 to the user's eyes and displays the image of the image data Pcc in front of the user's eyes. As an example, the light guide unit 32 can be configured by a holographic lens that is arranged in front of the user's eyes, has translucency, and is capable of guiding light from the display unit 31. It goes without saying that the configuration of the display unit 31 and the light guide unit 32 can adopt a different configuration as long as the image data Pcc can be displayed superimposed on the actual image Ppt of the patient PT. For example, the display unit 31 and the light guide unit 32 can be replaced by a laser scanner or the like.

 スピーカ33は、画像サーバ20や外部装置から受信された音声データに基づく音声を発してユーザ(医師DR)に情報を音声で伝達するための装置である。また、マイクロホン34は、ユーザが発した声(命令等)を検知して音声データに変換するための装置である。マイクロホン34が出力する音声データは、適宜画像サーバ20にも伝送され、画像サーバ20において収集・保存され得る。 The speaker 33 is a device that emits sound based on audio data received from the image server 20 or an external device to communicate information to the user (doctor DR) by voice. The microphone 34 is a device that detects the voice (commands, etc.) emitted by the user and converts it into audio data. The audio data output by the microphone 34 is also transmitted to the image server 20 as appropriate, and can be collected and stored in the image server 20.

 センサ35は、ARグラス30の移動、回転、傾斜等を検知したり、ARグラス30の眼前にある物体の深度を検知したりするためのセンサであり、例えば、加速度計、ジャイロ、磁力計、赤外線検知装置、深度センサ等が含まれる。 The sensor 35 is a sensor for detecting the movement, rotation, tilt, etc. of the AR glasses 30 and for detecting the depth of an object in front of the AR glasses 30, and includes, for example, an accelerometer, a gyro, a magnetometer, an infrared detection device, a depth sensor, etc.

 また、可視光/赤外線カメラ37は、例えば以下の動作を実行するための撮像装置である。
 ・ARグラス30の周囲の空間構造を決定する
 ・決定された空間構造におけるARグラス30の位置Xgを把握する
 ・ユーザの視線を検知する
 ・ユーザの眼前の物体の画像を撮像する
 ・マーカMKの位置Xmを把握する
 カメラの種類、及び台数、配置等は、その目的や仕様に応じて様々に変更可能である。
The visible light/infrared camera 37 is an imaging device for performing the following operations, for example.
- Determine the spatial structure around the AR glasses 30
- Grasp the position Xg of the AR glasses 30 in the determined spatial structure
- Detect the user's line of sight
- Capture an image of an object in front of the user's eyes
- Grasp the position Xm of the marker MK
The type, number, placement, etc. of the cameras can be changed in various ways depending on the purpose and specifications.

 ARグラス30に格納される画像表示制御プログラムは、実行されることにより、マーカ識別部391、座標演算部392、視線誘導部393、器具先端識別部394、及び変換部395をARグラス30内に実現させる。なお、ARグラス30における演算負荷の軽減のため、これらのプログラムは、画像サーバ20や他のコンピュータにおいて実行されてもよい。 The image display control program stored in the AR glasses 30, when executed, realizes a marker identification unit 391, a coordinate calculation unit 392, a gaze guidance unit 393, an instrument tip identification unit 394, and a conversion unit 395 within the AR glasses 30. Note that, in order to reduce the calculation load on the AR glasses 30, these programs may be executed on the image server 20 or another computer.

 マーカ識別部391は、ARグラス30においてユーザに提示されたビューにおいて、患者PTの体表面に貼付されたマーカMKを識別するための画像処理を実行する機能を有する。座標演算部392は、CT画像Pccに含まれるマーカMKの座標(第1座標)、及びマーカ識別部391で識別されたマーカMKの座標(第2座標)を演算する。図4に示すように、CT画像PccにおけるマーカMKの座標Xm1´~Xm3´は、CT装置10内の基準点を基準として設定された座標系Cx1の座標として計算される。一方、ARグラス30においてマーカ識別部391で識別されたマーカMKの座標Xm1~Xm3は、例えばARグラス30の位置Xgを基準点とした座標系Cx2の座標として計算され得る。本実施の形態では、マーカ識別のための演算量の低減のため、マーカ識別部391は、後述する視線誘導部393により誘導された視線に対応する一部の領域においてのみマーカの識別動作を実行する。 The marker identification unit 391 has a function of executing image processing to identify the marker MK affixed to the body surface of the patient PT in the view presented to the user by the AR glasses 30. The coordinate calculation unit 392 calculates the coordinates (first coordinates) of the marker MK included in the CT image Pcc and the coordinates (second coordinates) of the marker MK identified by the marker identification unit 391. As shown in FIG. 4, the coordinates Xm1' to Xm3' of the marker MK in the CT image Pcc are calculated as coordinates of a coordinate system Cx1 set based on a reference point in the CT device 10. On the other hand, the coordinates Xm1 to Xm3 of the marker MK identified by the marker identification unit 391 in the AR glasses 30 can be calculated as coordinates of a coordinate system Cx2, for example, based on the position Xg of the AR glasses 30. In this embodiment, in order to reduce the amount of calculation required for marker identification, the marker identification unit 391 performs the marker identification operation only in a partial area that corresponds to the line of sight guided by the line of sight guidance unit 393, which will be described later.

 視線誘導部393は、ARグラス30のユーザの視線をマーカMKに誘導する機能を有する。後述するように、視線誘導部393は、マーカMKの1つに視線を向けるようにユーザを誘導し、誘導の結果、マーカMKに視線を向けている状態が得られた場合、その視線の方向に対応する一部の領域においてマーカ識別部391がマーカ識別動作を実行する。これにより、マーカ識別動作を実行する領域が限定され、演算量を低減することができる。器具先端識別部394は、カテーテル等の処置具の先端を識別するための部分である。変換部395は、座標演算部392で演算された座標Xm1´~Xm3´と座標Xm1~Xm3を比較し、座標Xm1´~Xm3´を、座標Xm1~Xm3に合致させるような変換を実行する。このような変換が実行されることにより、CT画像Pccを正確にARグラス30で観察される患者PTの実際の画像に合わせこむことが可能になり、胸腔ドレナージを正確に行うことを支援することができる。変換部395における演算は、アフィン変換のような線形変換であってもよいし、例えばB-spline法等を用いた非線形変換であってもよい。 The gaze guidance unit 393 has the function of guiding the gaze of the user of the AR glasses 30 to the marker MK. As described below, the gaze guidance unit 393 guides the user to direct the gaze to one of the markers MK, and when, as a result of the guidance, the gaze is directed at the marker MK, the marker identification unit 391 performs a marker identification operation in a partial area corresponding to the direction of the gaze. This limits the area in which the marker identification operation is performed, making it possible to reduce the amount of calculation. The instrument tip identification unit 394 is a part for identifying the tip of a treatment tool such as a catheter. The conversion unit 395 compares the coordinates Xm1' to Xm3' calculated by the coordinate calculation unit 392 with the coordinates Xm1 to Xm3, and performs a transformation that brings the coordinates Xm1' to Xm3' into agreement with the coordinates Xm1 to Xm3. By performing such a transformation, it becomes possible to accurately match the CT image Pcc to the actual image of the patient PT observed through the AR glasses 30, thereby assisting in performing accurate thoracic drainage. The calculation in the conversion unit 395 may be a linear transformation such as an affine transformation, or may be a nonlinear transformation using, for example, the B-spline method.
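As an illustrative sketch only (not the disclosed implementation), the matching performed by the conversion unit 395 can be pictured as estimating a rigid transform from the three marker correspondences. This sketch assumes a purely rigid (rotation plus translation) model via the Kabsch algorithm, whereas the embodiment also contemplates affine or B-spline nonlinear transformations; the function name is hypothetical.

```python
import numpy as np

def align_markers(src, dst):
    """Estimate the rigid transform (R, t) that best maps the marker
    coordinates `src` (Xm1' to Xm3' in the CT coordinate system Cx1) onto
    `dst` (Xm1 to Xm3 in the AR-glasses coordinate system Cx2), using the
    Kabsch algorithm, so that dst ~ src @ R.T + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With R and t in hand, every voxel coordinate of the CT image can be mapped into the coordinate system Cx2 before display.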

 図5は、マーカMKの患者PTの体表面への貼付のしかたの一例を示している。マーカMKは、複数個、例えば3個(MK1~MK3)を1組として使われる。マーカMK1~MK3は、患者PTの体表面において、関心領域DAを囲むように貼付される。マーカMK1~3の各々は、直径2cm程度とすることができる。素材としては、例えば白色アルミナで構成することができる。人体への固定の方法は任意であるが、例えば医療用テープ(両面、片面)を利用して固定することができる。なお、CT装置10での撮像の場合、マーカMK1~3は、X線の透過度が人体に比べて低い素材であることが好適である。また、マーカMK1~3は、同一形状、同一色であってもよいが、識別の容易化のため、互いに異なる色(赤、青、黄色等)、形状(円形、四角形、三角形)、輝度等にするのが好ましい。 FIG. 5 shows an example of how to attach the markers MK to the body surface of the patient PT. A set of multiple markers, for example three markers (MK1 to MK3), is used. The markers MK1 to MK3 are attached to the body surface of the patient PT so as to surround the area of interest DA. Each of the markers MK1 to MK3 can be about 2 cm in diameter. They can be made of white alumina, for example. Any method can be used to fix them to the human body, but they can be fixed using medical tape (double-sided or single-sided). In addition, when imaging with the CT device 10, the markers MK1 to MK3 are preferably made of a material that has a lower X-ray transmittance than the human body. The markers MK1 to MK3 may be the same shape and color, but it is preferable to use different colors (red, blue, yellow, etc.), shapes (circle, square, triangle), brightness, etc. to make them easier to distinguish.

 マーカMK1~3は、胸腔ドレナージを行う場合には、一例として、マーカMK1、2を左右の鎖骨中点に各1つ、マーカMK3を季肋部(みぞおち)に貼付することができる。これらの点は、腕の動きや、呼吸等に起因した移動量が基準点に比べ少なく、少なくとも水平方向の移動量は少なく無視できる「不動点」とみなすことができる。不動点をマーカMK1~3の貼付位置として選択することで、よりCT画像Pccの位置合わせを正確に行うことが可能になる。ここで、基準点は、腕等の動きや呼吸に起因する移動量が平均程度の位置に設定することができる。 When performing thoracic drainage, as an example, markers MK1 and MK2 can be attached one each at the midpoints of the left and right clavicles, and marker MK3 at the hypochondrium (the pit of the stomach). These points can be regarded as "fixed points" whose displacement due to arm movement, breathing, etc. is smaller than that of the reference point, and whose displacement, at least in the horizontal direction, is small enough to be negligible. By selecting fixed points as the attachment positions of markers MK1 to MK3, it becomes possible to align the CT image Pcc more accurately. Here, the reference point can be set at a position where the amount of movement due to arm movement, breathing, etc. is about average.

 図6~図8を参照して、第1の実施の形態の医療処置支援システム1の動作を説明する。図6はCT装置10を用いたCT画像Pccの撮像及びマーカMKの座標の算出の手順を示すフローチャートである。図7は、ARグラス30におけるCT画像Pccと患者PTの実際の画像Pptを重畳表示する手順を示すフローチャートである。図8は、マーカMKへの視線誘導をする場合における画面表示の一例を示している。 The operation of the medical procedure support system 1 of the first embodiment will be described with reference to Figures 6 to 8. Figure 6 is a flowchart showing the procedure for taking a CT image Pcc using the CT device 10 and calculating the coordinates of the marker MK. Figure 7 is a flowchart showing the procedure for superimposing the CT image Pcc and the actual image Ppt of the patient PT on the AR glasses 30. Figure 8 shows an example of a screen display when guiding the line of sight to the marker MK.

 この医療処置支援システム1により胸腔ドレナージを実行する場合、まず患者PTの胸部にマーカMKを貼付して(ステップS11)、その後CT装置10において患者の胸部のCT画像Pcc0を撮影する(ステップS12)。なお、CT画像Pcc0の撮影においては、患者PTは、一般的なCT撮影とは異なり、両腕を下半身方向に下げて撮影を行うのが好適である。これは、その後の胸腔ドレナージにおいては両腕を下げて行うため、その時の姿勢と同じ姿勢でCT撮影をすることが好ましいからである。 When performing thoracic drainage using this medical procedure support system 1, first a marker MK is attached to the chest of the patient PT (step S11), and then a CT image Pcc0 of the patient's chest is taken with the CT device 10 (step S12). Note that when taking the CT image Pcc0, unlike typical CT scans, it is preferable for the patient PT to have both arms lowered toward the lower body during the scan. This is because subsequent thoracic drainage will be performed with both arms lowered, and it is therefore preferable to perform the CT scan in the same posture as that at that time.

 撮影されたCT画像Pcc0は、画像サーバ20に転送(取得)され、ボクセルデータとして処理された中間ファイル(Pcc’)が生成され保存される(ステップS13)。また、ボクセルデータは、ARグラス30にも転送される。ARグラス30内の座標演算部392は、このボクセルデータに基づいて3つのマーカMK1~MK3の位置を特定し、その座標Xm1´,Xm2´、Xm3´を算出する(ステップS14)。なお、ステップS14は、次の図7のフローチャートの中において、図7中のステップと並行して行われてもよい。なお、ボクセルデータは、3次元立方体の格子空間の各格子にCT値が入ったデータ(volumetric data)である。各格子のデータはボクセル値と呼ばれる。マーカMKを識別するための特定のCT値を有するボクセルを取り出すことで、マーカMKの座標Xmiを得ることができる。 The captured CT image Pcc0 is transferred (acquired) to the image server 20, where an intermediate file (Pcc') processed as voxel data is generated and saved (step S13). The voxel data is also transferred to the AR glasses 30. The coordinate calculation unit 392 in the AR glasses 30 identifies the positions of the three markers MK1 to MK3 based on this voxel data, and calculates their coordinates Xm1', Xm2', and Xm3' (step S14). Note that step S14 may be performed in parallel with the steps in the flowchart of FIG. 7 below. Note that the voxel data is volumetric data in which the CT value is entered in each lattice of a 3D cubic lattice space. The data of each lattice is called the voxel value. The coordinate Xmi of the marker MK can be obtained by extracting a voxel having a specific CT value for identifying the marker MK.
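The extraction of marker coordinates from voxel data described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes the markers appear as small clusters of voxels whose CT values exceed a threshold (the marker material has lower X-ray transmittance than the body), uses 6-connectivity, and returns the centroid of each cluster as a marker coordinate Xmi.

```python
from collections import deque

def find_marker_coords(voxels, threshold):
    """Return the centroid (x, y, z) of each 6-connected cluster of voxels
    whose CT value is at or above `threshold` (assumed characteristic of
    the marker material). `voxels` is a nested list indexed [x][y][z]."""
    shape = (len(voxels), len(voxels[0]), len(voxels[0][0]))
    seen = set()
    centroids = []
    for x in range(shape[0]):
        for y in range(shape[1]):
            for z in range(shape[2]):
                if (x, y, z) in seen or voxels[x][y][z] < threshold:
                    continue
                # Breadth-first search over one cluster of bright voxels
                cluster, queue = [], deque([(x, y, z)])
                seen.add((x, y, z))
                while queue:
                    cx, cy, cz = queue.popleft()
                    cluster.append((cx, cy, cz))
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        nx, ny, nz = cx + dx, cy + dy, cz + dz
                        if (0 <= nx < shape[0] and 0 <= ny < shape[1]
                                and 0 <= nz < shape[2]
                                and (nx, ny, nz) not in seen
                                and voxels[nx][ny][nz] >= threshold):
                            seen.add((nx, ny, nz))
                            queue.append((nx, ny, nz))
                n = len(cluster)
                centroids.append(tuple(sum(c[i] for c in cluster) / n
                                       for i in range(3)))
    return centroids
```

In practice the threshold would be chosen from the CT value of the marker material, and the three largest clusters taken as MK1 to MK3.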

 ボクセルデータが画像サーバ20において取得・保存されると、続いて胸腔ドレナージを実行するため、患者PTは処置室に移動し(マーカMKは胸部に貼付したままとする)、医師DRはARグラス30を起動させ(ステップS21)、自身の頭部に装着する。起動後、ARグラス30は、各種センサ35や可視光/赤外線カメラ37の出力に従い、ARグラス30の周囲の空間構造S(x)を把握すると共に、ARグラス30の位置を特定し、その位置の座標Xgを算出する(ステップS22)。 Once the voxel data has been acquired and stored in the image server 20, the patient PT then moves to the treatment room for thoracic drainage (with the markers MK left attached to the chest), and the doctor DR activates the AR glasses 30 (step S21) and puts them on his or her head. After activation, the AR glasses 30 grasp the spatial structure S(x) around the AR glasses 30 according to the outputs of the various sensors 35 and the visible light/infrared camera 37, identify the position of the AR glasses 30, and calculate the coordinates Xg of that position (step S22).

 続いて、ARグラス30の視線誘導部393は、表示部31を介してユーザである医師DRに対し、図8に示すような画面を表示させ、医師DRの視線をマーカMK1~3の1つに順次誘導する(ステップS23)。具体的には、画面の一部(例えば左下)に、「左上のマーカに視線を向けてください」との指示文OR1が表示されると共に、可視光/赤外線カメラ37の出力に従って検出された医師DRの視線の方向を示す視線表示マークGDM、及び視線を中心とする領域を示す領域表示マークCMが表示される。医師DRは、領域表示マークCMの中心付近に目的のマーカMK1が含まれていると判断した場合には、不図示の確定ボタンを押して、領域表示マークCMの位置を確定させる。その後、マーカMK2、MK3についても同様の動作が繰り返され、マーカMK1~MK3を含む領域表示マークCMの位置が特定される(ステップS24)。領域表示マークの確定は、確定ボタンの他、例えばマイクロホン34に向けて確定を指示する音声を発することによって行ってもよい。また、所定時間以上(例えば3秒以上)視線が固定されていたと判断される場合に、その視線方向を領域表示マークCMの位置と判定してもよい。 Then, the line of sight guidance unit 393 of the AR glasses 30 displays a screen as shown in FIG. 8 for the doctor DR, who is the user, via the display unit 31, and sequentially guides the doctor DR's line of sight to one of the markers MK1 to MK3 (step S23). Specifically, an instruction OR1 saying "Please look at the marker in the upper left" is displayed in a part of the screen (for example, the lower left), and a line of sight display mark GDM indicating the direction of the doctor DR's line of sight detected according to the output of the visible light/infrared camera 37 and an area display mark CM indicating the area centered on the line of sight are displayed. If the doctor DR determines that the target marker MK1 is included near the center of the area display mark CM, he or she presses a confirmation button (not shown) to confirm the position of the area display mark CM. After that, the same operation is repeated for the markers MK2 and MK3, and the position of the area display mark CM including the markers MK1 to MK3 is identified (step S24). The confirmation of the area display mark may be performed by, for example, issuing a voice commanding confirmation to the microphone 34, in addition to using the confirmation button. Also, if it is determined that the gaze is fixed for a predetermined period of time or more (for example, 3 seconds or more), the gaze direction may be determined to be the position of the area display mark CM.
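The dwell-based confirmation mentioned above (treating a gaze held for a predetermined time, e.g. 3 seconds or more, as confirming the area display mark CM) can be sketched as follows. This is an illustrative sketch under assumed units and thresholds, not the disclosed implementation; the function name and jitter tolerance are hypothetical.

```python
def confirm_by_dwell(samples, dwell_s=3.0, max_jitter=0.05):
    """Return the gaze direction to adopt as the area display mark CM once
    the line of sight has stayed within `max_jitter` of an anchor point for
    at least `dwell_s` seconds, or None if no such fixation occurs.
    `samples` is a chronological list of (timestamp_s, (x, y)) pairs in
    normalized view coordinates (units are illustrative)."""
    anchor, t0 = None, None
    for t, (x, y) in samples:
        if (anchor is None
                or abs(x - anchor[0]) > max_jitter
                or abs(y - anchor[1]) > max_jitter):
            anchor, t0 = (x, y), t   # gaze moved: restart the dwell timer
        elif t - t0 >= dwell_s:
            return anchor            # fixation held long enough: confirm CM
    return None
```

A confirmation button or a spoken command, as described above, would simply bypass this timer.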

 領域表示マークCMの位置が特定されると、マーカ識別部391が起動して、領域表示マークCM内に含まれるマーカMKの位置が特定される(ステップS25)。領域表示マークCMの面積は、ARグラス30のビュー全体の面積に比べて非常に小さいため、演算量を抑制することができる。マーカMKの位置が特定されると、マーカMKの座標Xm(Xm1~3)が座標演算部392において算出される(ステップS26)。ステップS23~S26は、マーカMKの数だけ繰り返される。 When the position of the area display mark CM is specified, the marker identification unit 391 is activated, and the position of the marker MK included in the area display mark CM is specified (step S25). Since the area of the area display mark CM is very small compared to the area of the entire view of the AR glasses 30, the amount of calculation can be suppressed. When the position of the marker MK is specified, the coordinates Xm (Xm1 to Xm3) of the marker MK are calculated by the coordinate calculation unit 392 (step S26). Steps S23 to S26 are repeated as many times as there are markers MK.

 続いて、座標Xm1´~Xm3´と座標Xm1~Xm3の位置合わせを行った後(ステップS27)、アフィン変換等により、座標系Cx1で表現されているボクセルデータを座標系Cx2での表現に変換する(ステップS28)。座標系Cx2で表現されたボクセルデータに対し、座標Xgから光が発せられたと仮定し、当該光が通過する格子のボクセル値を適宜取り扱う(ボリュームレイキャスティング)。これにより、ARグラス30に表示される3次元画像であるCT画像Pccが得られる。なお、座標Xgは、ARグラスの左右眼のそれぞれについて定義し、左右の位置からそれぞれ光を飛ばして得られた画像を表示することが可能である。これにより、視差のある画像を得て、立体視が可能となる。 Subsequently, after aligning the coordinates Xm1' to Xm3' with the coordinates Xm1 to Xm3 (step S27), the voxel data expressed in the coordinate system Cx1 is converted to an expression in the coordinate system Cx2 by affine transformation or the like (step S28). For the voxel data expressed in the coordinate system Cx2, it is assumed that light is emitted from the coordinate Xg, and the voxel values of the lattice through which the light passes are appropriately handled (volume ray casting). This results in a CT image Pcc, which is a three-dimensional image displayed on the AR glasses 30. Note that the coordinate Xg is defined for each of the left and right eyes of the AR glasses, and it is possible to display images obtained by emitting light from left and right positions, respectively. This results in an image with parallax, making stereoscopic vision possible.
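The ray-casting step named above can be pictured with a deliberately simplified sketch. Real volume ray casting accumulates color and opacity along each ray; the version below keeps only the maximum voxel value encountered (a maximum intensity projection), which is an assumption made here for brevity, and all names are illustrative.

```python
def cast_ray(voxels, origin, direction, step=0.5, n_steps=64):
    """Sample the voxel grid from `origin` (the viewpoint Xg) along
    `direction`, returning the largest voxel value met on the ray.
    `voxels` is a nested list indexed [x][y][z]; samples falling outside
    the grid are ignored."""
    sx, sy, sz = len(voxels), len(voxels[0]), len(voxels[0][0])
    best = 0
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(n_steps):
        i, j, k = int(x), int(y), int(z)
        if 0 <= i < sx and 0 <= j < sy and 0 <= k < sz:
            best = max(best, voxels[i][j][k])
        x, y, z = x + dx * step, y + dy * step, z + dz * step
    return best
```

Casting one such ray per display pixel from each of the two per-eye viewpoints Xg yields the pair of parallax images mentioned in the text.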

 得られた3次元画像であるCT画像Pccは、表示部31に送信され、ARグラス30において、患者PTの実際の画像Pptと共に重畳表示される。ARグラス30のプログラムが画像サーバ20において実行されている場合には、CT画像の3次元画像のデータは、例えば無線通信により、無線通信部36を介して表示部31に送信される。送信レートは、30フレーム/秒程度であり、上記のようにユーザ(医師DR)が関心領域DA付近に視線を固定している場合には十分滑らかな画像として表示され得る。送信レートが90フレーム/秒まで上がると、ユーザが視線をずらしたときでも滑らかに画像を表示することができる。 The obtained three-dimensional image, CT image Pcc, is transmitted to the display unit 31 and displayed on the AR glasses 30 in a superimposed manner together with the actual image Ppt of the patient PT. When the program for the AR glasses 30 is being executed on the image server 20, the three-dimensional image data of the CT image is transmitted to the display unit 31 via the wireless communication unit 36, for example by wireless communication. The transmission rate is about 30 frames/second, and when the user (doctor DR) fixes his/her gaze near the area of interest DA as described above, the image can be displayed as being sufficiently smooth. When the transmission rate increases to 90 frames/second, the image can be displayed smoothly even when the user shifts his/her gaze.

 マーカMKをARグラス30が認識する空間構造の全体から抽出しようとすると、演算量が非常に大きくなり、例えば3フレーム/秒程度しかCT画像を送信することができなくなる。そこで、本実施の形態では、視線誘導部393によりマーカMKの方向にユーザ(医師DR)の視線を誘導し、視線の方向に領域表示マークCMを設定し、その領域表示マークCMの中においてマーカMKを認識(探索)する。これにより、探索対象となる空間範囲を狭めて、迅速に(一例として、全体を探索する場合に比べ、50~100倍程度速く)マーカMKを認識することができる。この動作を3つのマーカに関し順次行うことができる。 If an attempt is made to extract the marker MK from the entire spatial structure recognized by the AR glasses 30, the amount of calculation becomes extremely large, and, for example, CT images can only be transmitted at about 3 frames per second. Therefore, in this embodiment, the gaze guidance unit 393 guides the gaze of the user (doctor DR) in the direction of the marker MK, an area display mark CM is set in the gaze direction, and the marker MK is recognized (searched for) within that area display mark CM. This narrows the spatial range to be searched, and allows the marker MK to be recognized quickly (as an example, roughly 50 to 100 times faster than when searching the entire view). This operation can be performed sequentially for the three markers.
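The order of magnitude of that speedup can be checked with a back-of-the-envelope area ratio. The screen resolution and region radius below are assumed values for illustration, not figures from the disclosure.

```python
import math

def search_speedup(view_w_px, view_h_px, cm_radius_px):
    """Rough ratio of the pixels scanned by a full-view marker search to
    those scanned inside a circular area display mark CM of radius
    `cm_radius_px` (all numbers illustrative)."""
    full_area = view_w_px * view_h_px
    cm_area = math.pi * cm_radius_px ** 2
    return full_area / cm_area
```

For an assumed 1920 x 1080 view and a CM radius of 100 px this gives roughly a 66-fold reduction in the area searched, the same order as the 50-to-100-times figure above.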

 As described above, according to the medical procedure support system 1 of the first embodiment, when a CT image obtained by the CT device 10 is superimposed on the display of the AR glasses 30, the marker MK is identified within a limited region after the gaze has been guided by the gaze guidance unit 393. The CT image can therefore be registered quickly with a small amount of computation.

[Second embodiment]
 Next, a medical procedure support system 1 according to a second embodiment will be described with reference to Figs. 9 to 13. The overall configuration of the system 1 and the configurations of the image server 20 and the AR glasses 30 may be the same as in the first embodiment (Figs. 1 to 3), so duplicated descriptions are omitted. The system 1 of the second embodiment differs from the first embodiment in that it determines the respiratory state of the patient PT and displays a CT image matched to that state. Specifically, as shown in Fig. 9, in addition to the markers MK1 to MK3 placed at fixed points, a marker MK4 (respiration detection marker) placed at a fluctuating point that moves with respiration is used. The movement of this marker MK4 is detected, and a CT image corresponding to the movement is displayed on the AR glasses 30. Here, a "fluctuating point" is, for example, near the patient's nipple (in the case of a man); when chest breathing is dominant, this position is a suitable attachment position (fluctuating point) for the marker MK4. For a patient in whom abdominal breathing is more prevalent than chest breathing, it may be preferable to attach the marker MK4 near the navel on the abdomen.

 The concept of CT image acquisition in the system 1 of the second embodiment will be described with reference to Fig. 10. For example, the coordinates of the marker MK4 are obtained, and CT images Pcc10 to Pcc30 are acquired, in each of an expiratory state in which exhalation is at its maximum, an inspiratory state in which inhalation is at its maximum, and an intermediate state between the two. In thoracic drainage, a planar displacement of up to about 1 cm can arise in the CT image between the inhaled state (inspiratory state) and the exhaled state (expiratory state), so a corresponding shift can also arise among the CT images Pcc10 to Pcc30 in the example of Fig. 10. For this reason, the system 1 of the second embodiment acquires the multiple CT images Pcc10 to Pcc30 with respiration taken into account and stores them in association with the coordinate values of the marker MK.
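A minimal sketch of this association, assuming a purely hypothetical in-memory store (the state names, coordinates, and image labels are illustrative, not from the specification): each respiratory state maps the MK4 coordinate measured at acquisition time to the CT image acquired in that state, and the image whose recorded MK4 position is closest to the currently observed one is selected for display.

```python
# Hypothetical store: respiratory state -> recorded MK4 coordinate + CT image.
ct_store = {
    "expiration":   {"mk4": (120.0, 80.0), "image": "Pcc10"},
    "intermediate": {"mk4": (120.0, 84.0), "image": "Pcc20"},
    "inspiration":  {"mk4": (120.0, 88.0), "image": "Pcc30"},
}

def select_ct_image(mk4_now):
    """Pick the stored CT image whose recorded MK4 coordinate is
    closest (squared Euclidean distance) to the observed MK4 position."""
    def dist2(entry):
        dx = entry["mk4"][0] - mk4_now[0]
        dy = entry["mk4"][1] - mk4_now[1]
        return dx * dx + dy * dy
    return min(ct_store.values(), key=dist2)["image"]
```

For example, an observed MK4 position near the recorded inspiratory coordinate selects the inspiratory image Pcc30.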

 The operation of the system 1 of the second embodiment will be described with reference to the flowcharts of Figs. 11 and 12. The basic operation is the same as in the first embodiment, so duplicated descriptions are omitted. As explained with Fig. 10, however, the second embodiment differs in that multiple CT images are acquired and stored, one for each respiratory state (steps S12' and S13'), and the coordinates Xmi' of the markers MK are calculated for each of these CT images (step S14').

 The procedure shown in Fig. 12 is also basically the same as the procedure shown in Fig. 7, but differs from the first embodiment in that, in steps S30 and S31, the position of the marker MK4 is identified in the view of the AR glasses 30 and the displayed CT image is switched according to that position.

 In the example of Fig. 10, a separate CT image is also taken for the intermediate state and included in the switched display. Alternatively, as shown in Fig. 13, CT images may be taken only for the expiratory and inspiratory states, and the CT image for the intermediate state may be generated by interpolation based on the expiratory and inspiratory CT images.
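The interpolation alternative can be sketched as follows; this is a linear blend over toy one-dimensional "images" for illustration only (the specification does not fix the interpolation method):

```python
def interpolate_intermediate(exhale_img, inhale_img, t):
    """Linearly interpolate voxel values between the expiratory (t = 0)
    and inspiratory (t = 1) CT images to synthesise an intermediate state."""
    return [(1.0 - t) * e + t * i for e, i in zip(exhale_img, inhale_img)]

# At t = 0.5 each voxel is the average of the two acquired states.
mid = interpolate_intermediate([0.0, 100.0, 40.0], [10.0, 120.0, 60.0], 0.5)
```

Taking t from the observed MK4 displacement (0 at full exhalation, 1 at full inhalation) would let a single pair of acquisitions cover the whole breathing cycle.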

 As described above, the system of the second embodiment not only provides the effects of the first embodiment but can also display a CT image matched to the respiratory state of the patient PT, which can make it easier for the doctor DR to carry out the medical procedure.

[Third embodiment]
 Next, a medical procedure support system 1 according to a third embodiment will be described with reference to Fig. 14. The overall configuration of the system 1 may be the same as in the first embodiment (Fig. 1), so duplicated descriptions are omitted. In this third embodiment, however, the 3D modeling engine 26 and the 3D model display engine 27 are omitted from the image server 20; instead, a 3D modeling engine 395 and a 3D model display engine 396 are provided in the AR glasses 30. The function of 3D-modeling the CT images is thus transferred to the AR glasses 30, and the basic operation is the same as in the first embodiment.

[Fourth embodiment]
 The configuration of a medical procedure support system 1 according to a fourth embodiment will be described with reference to Fig. 15. The overall configuration of the system 1 is the same as in the first embodiment, and the configurations of the image server 20 and the AR glasses 30 may also be the same, so duplicated descriptions are omitted. The system 1 of the fourth embodiment differs from the first embodiment in that, in addition to the AR glasses 30, AR glasses 30T are provided on which the same images as on the AR glasses 30 can be observed and which are worn by an instructor DRT of the doctor DR.

 The instructor DRT stays near the doctor DR being trained and wears the AR glasses 30T. Like the AR glasses 30, the AR glasses 30T allow observation of the actual image Ppt of the patient PT and can superimpose the CT image Pcc on it. The instructor DRT can guide the doctor DR while viewing the display of the AR glasses 30T. The AR glasses 30 and 30T may exchange data directly by wireless communication or indirectly via the image server 20.

[Fifth embodiment]
 The configuration of a medical procedure support system 1 according to a fifth embodiment will be described with reference to Fig. 16, which shows the overall configuration of the system. The CT device 10 is not shown.

 The medical procedure support system 1 of the fifth embodiment supports remotely operated thoracic drainage: the target patient PT is in a hospital HP1, while the doctor DR in charge of the thoracic drainage is in a hospital HP2 geographically distant from the hospital HP1. The hospitals HP1 and HP2 and the image server 20 are connected by a network NW.

 The patient PT in the hospital HP1 has the same markers MK attached as in the preceding embodiments. A robot 70 that inserts the thoracic drainage catheter 60 is installed next to the patient PT, and an actual image of the patient PT is captured by a camera 80. The image captured by the camera 80 is transmitted to the AR glasses 30 via the network NW. The robot 70 is operated by a controller 90 held by the doctor DR in the hospital HP2. The AR glasses 30 superimpose the CT image Pcc on the actual image Ppt of the patient PT sent from the camera 80, and can perform the same registration and conversion as in the preceding embodiments.

 According to this fifth embodiment, the effects of the preceding embodiments can also be obtained in medical procedures such as remotely operated thoracic drainage.

[Modification]
 In the various embodiments described above, the procedure performed by the doctor DR is thoracic drainage, but as noted earlier, the present invention is not limited to thoracic drainage and can be applied to other procedures. For example, a similar system can be used for injections into a vein of the arm or an artery of the wrist: at least three markers MK are attached around the vein of the forearm or around the artery of the wrist, CT images are taken, and those CT images are displayed on the AR glasses 30 in the same manner.

 Similarly, for epidural anesthesia, at least three markers MK are attached around the spine on the back, a CT scan is performed in the same way (a chest CT is taken in the lateral decubitus position), and the images are used during the subsequent epidural anesthesia. In the CT image displayed on the AR glasses 30, the spinal cord and the dura mater can be identified, and the epidural anesthesia can be performed by inserting the needle into the epidural space.

[Others]
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments have been described in detail to explain the present invention clearly, and the invention is not necessarily limited to configurations having all of the described elements. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, the configuration of one embodiment can be added to that of another, and part of the configuration of each embodiment can have other elements added, deleted, or substituted. For example, in acquiring a CT image, image processing may be performed such as lowering the brightness of the portion of the CT image where the catheter 60 is present, or increasing the transparency of the catheter 60 relative to the actual image of the patient. A configuration may also be adopted in which the transparency of the CT image is changed by voice operation via the microphone 34 of the AR glasses 30, or in which the transparency of the catheter 60 and the like in the CT image can be changed according to the doctor's preference rather than automatically.

[Sixth embodiment]
 Next, a medical procedure support system 1 according to a sixth embodiment will be described with reference to Figs. 17 to 20. The overall configuration of the system 1 and the configuration of the image server 20 may be the same as in the first embodiment (Figs. 1 and 2), so duplicated descriptions are omitted. The system 1 of the sixth embodiment differs from the first embodiment in that, as shown in Fig. 17, when the image display control program stored in the AR glasses 30 is executed, the AR glasses 30 also function as a determination unit 397, which will be described later.

 As shown in Fig. 18, the system 1 of the sixth embodiment also differs from the first embodiment in that a marker MK4 is used in addition to the three markers MK1, MK2, and MK3 to determine the degree of match between the patient whose CT image Pcc was taken and the patient present when a procedure such as thoracic drainage is performed using the AR glasses 30.

 The marker MK4 is used for patient identification and is attached at a position where the change in position due to the patient's breathing is at or below a reference value, that is, at a position that can be regarded as a fixed point as described in the first embodiment. In the example of Fig. 18, the marker MK4 is attached near the manubrium, but its attachment position is not limited to that area.

 In this embodiment, the CT device 10 images the region of interest DA of the patient PT together with the four markers MK1 to MK4 and acquires the CT image Pcc as the region-of-interest image.

 After the CT image Pcc has been taken, the patient PT moves to the treatment room and receives treatment such as thoracic drainage from the doctor DR. The markers MK1 to MK4 remain attached to the body surface of the patient PT while the treatment is performed.

 The doctor DR in charge of the treatment wears the AR glasses 30 and performs the treatment while observing the actual image Ppt of the patient seen through the AR glasses 30 and the CT image Pcc superimposed on it.

 When the image display control program stored in the AR glasses 30 is executed, the AR glasses 30 function as the marker identification unit 391, the coordinate calculation unit 392, the gaze guidance unit 393, the instrument tip identification unit 394, and the conversion unit 395, as described in the first embodiment. In this embodiment, the AR glasses 30 additionally function as the determination unit 397, whose processing is executed, for example, between steps S26 and S27 in the flowchart of Fig. 7.

 The determination unit 397 determines the degree of match between the patient displayed on the AR glasses 30 and the patient whose region-of-interest image was taken, based on the positions Xm1 to Xm4 of the markers MK1 to MK4 identified by the marker identification unit 391 and the positions Xm1’ to Xm4’ of the markers MK1 to MK4 in the CT image Pcc serving as the region-of-interest image.

 For example, the determination unit 397 determines the degree of match between the two patients based on the distances between the marker MK4, selected as a specific marker from among the markers identified by the marker identification unit 391, and the other markers MK1 to MK3, and on the distances between the marker MK4’ in the CT image Pcc serving as the region-of-interest image and the other markers MK1’ to MK3’.

 Specifically, as shown in Fig. 19, the determination unit 397 calculates the distances d14, d24, and d34 from the position Xm4 of the marker MK4 to the positions Xm1 to Xm3 of the other markers MK1 to MK3. Similarly, it calculates the distances d14’, d24’, and d34’ from the position Xm4’ of the marker MK4’ in the CT image Pcc to the positions Xm1’ to Xm3’ of the other markers MK1’ to MK3’.

 Next, the determination unit 397 calculates the average value da of the distances d14, d24, and d34 and the average value da’ of the distances d14’, d24’, and d34’, and then calculates the (absolute value of the) difference Δd between da and da’. If the difference Δd is at or below a predetermined threshold, the determination unit 397 determines that the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was taken; if Δd exceeds the threshold, it determines that they do not match, and a warning message indicating that the patients are different is displayed on the display unit 31 of the AR glasses 30.

 Since the average values da and da’ are considered to be patient-specific, determining whether their difference is at or below the threshold makes it possible to determine whether the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was taken. This prevents a patient other than the one whose CT image was taken from mistakenly receiving treatment from the doctor DR. A degree of match may also be calculated from the difference Δd and displayed on the display unit 31 of the AR glasses 30. The degree of match is expressed, for example, as a percentage, calculated with a formula in which the degree of match is 100% when Δd is 0 and decreases as Δd increases.
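The average-distance comparison can be sketched as follows. This is an illustrative reading of the check, with hypothetical function names and an arbitrary threshold; the specification does not fix these details.

```python
import math

def avg_dist(mk4, others):
    """Mean distance from the specific marker MK4 to the other markers."""
    return sum(math.dist(mk4, p) for p in others) / len(others)

def patients_match(mk4, others, mk4_ct, others_ct, threshold):
    """Compare the mean MK4-to-marker distance measured through the AR
    glasses (da) with the one measured in the CT image (da'); a small
    difference |da - da'| means the marker geometries agree."""
    delta = abs(avg_dist(mk4, others) - avg_dist(mk4_ct, others_ct))
    return delta <= threshold
```

With identical marker layouts the difference is zero and the check passes; a scaled-up layout (a different body) fails it.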

 The determination unit 397 may also determine whether the two patients match by determining whether the sum of the squares of the differences between the corresponding distances is at or below a predetermined threshold.

 Specifically, the determination unit 397 calculates the difference Δd14 between the distances d14 and d14’, the difference Δd24 between the distances d24 and d24’, and the difference Δd34 between the distances d34 and d34’.

 It then calculates the sum of the squares of these differences, that is, Δd14² + Δd24² + Δd34². If the calculated sum of squares is at or below a threshold, it determines that the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was taken; otherwise, it determines that they do not match. A degree of match may also be calculated from the sum of squares and displayed on the display unit 31 of the AR glasses 30, expressed, for example, as a percentage with a formula in which the degree of match is 100% when the sum of squares is 0 and decreases as the sum of squares increases.
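A sketch of the squared-sum variant, again with a hypothetical threshold chosen only for illustration:

```python
def squared_sum_of_diffs(dists_ar, dists_ct):
    """Sum of squared differences between corresponding MK4-to-marker
    distances (d14, d24, d34) measured in the AR view and in the CT image,
    i.e. the quantity compared against the threshold."""
    return sum((a - c) ** 2 for a, c in zip(dists_ar, dists_ct))

# Measured triplets (d14, d24, d34) and (d14', d24', d34'):
s = squared_sum_of_diffs([3.0, 4.0, 5.0], [3.1, 3.9, 5.2])
matched = s <= 0.1  # hypothetical threshold
```

Unlike the average-based check, each marker's deviation contributes individually, so a single badly displaced marker cannot be masked by the others.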

 The determination unit 397 may also determine the degree of match between the two patients based on the angles formed by the line segments connecting the marker MK4, selected as a specific marker from among the markers identified by the marker identification unit 391, to the other markers MK1 to MK3, and on the angles formed by the line segments connecting the marker MK4’ in the CT image Pcc serving as the region-of-interest image to the other markers MK1’ to MK3’.

 Specifically, as shown in Fig. 20, the determination unit 397 calculates the angle θ12 between a line segment L14 connecting the position Xm4 of the marker MK4 identified by the marker identification unit 391 to the position Xm1 of the marker MK1 and a line segment L24 connecting the position Xm4 to the position Xm2 of the marker MK2. It also calculates the angle θ13 between the line segment L14 and a line segment L34 connecting the position Xm4 to the position Xm3 of the marker MK3, and the angle θ23 between the line segments L34 and L24.

 Similarly, the determination unit 397 calculates the angle θ12’ between a line segment L14’ connecting the position Xm4’ of the marker MK4’ in the CT image Pcc to the position Xm1’ of the marker MK1’ and a line segment L24’ connecting the position Xm4’ to the position Xm2’ of the marker MK2’. It also calculates the angle θ13’ between the line segment L14’ and a line segment L34’ connecting the position Xm4’ to the position Xm3’ of the marker MK3’, and the angle θ23’ between the line segments L34’ and L24’.

 Next, the determination unit 397 calculates the average value θa of the angles θ12, θ13, and θ23 and the average value θa’ of the angles θ12’, θ13’, and θ23’, and then calculates the (absolute value of the) difference Δθ between θa and θa’. If the difference Δθ is at or below a predetermined threshold, the determination unit 397 determines that the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was taken; if Δθ exceeds the threshold, it determines that they do not match, and a warning message indicating that the patients are different is displayed on the display unit 31 of the AR glasses 30.

 Since the average values θa and θa’ are considered to be patient-specific, determining whether their difference is at or below the threshold makes it possible to determine whether the two patients match, which prevents a patient other than the one whose CT image was taken from mistakenly receiving treatment from the doctor DR. A degree of match may also be calculated from the difference Δθ and displayed on the display unit 31 of the AR glasses 30, expressed, for example, as a percentage with a formula in which the degree of match is 100% when Δθ is 0 and decreases as Δθ increases.
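The angle computation can be sketched in two dimensions as follows (an illustrative sketch: the specification works with the positions Xm1 to Xm4 in the AR coordinate system, and the right-angle marker layout below is a made-up example):

```python
import math

def angle_between(v1, v2):
    """Angle (radians) between two segments expressed as vectors from MK4."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def mean_angle_from_mk4(mk4, mk1, mk2, mk3):
    """Average of the three angles θ12, θ13, θ23 formed at MK4 by the
    segments L14, L24, L34 to the other markers."""
    v1 = (mk1[0] - mk4[0], mk1[1] - mk4[1])
    v2 = (mk2[0] - mk4[0], mk2[1] - mk4[1])
    v3 = (mk3[0] - mk4[0], mk3[1] - mk4[1])
    return (angle_between(v1, v2) + angle_between(v1, v3)
            + angle_between(v2, v3)) / 3.0

# Right-angle layout: θ12 = 90°, θ13 = 180°, θ23 = 90° → mean 120° (2π/3 rad).
mean = mean_angle_from_mk4((0, 0), (1, 0), (0, 1), (-1, 0))
```

Because angles are invariant to uniform scaling, this check complements the distance-based one: it responds to a differently *shaped* marker layout even when the overall size happens to agree.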

 The determination unit 397 may also determine the degree of match between the two patients by determining whether the sum of the squares of the differences between the corresponding angles is at or below a predetermined threshold.

 Specifically, the determination unit 397 calculates the difference Δθ12 between the angles θ12 and θ12’, the difference Δθ13 between the angles θ13 and θ13’, and the difference Δθ23 between the angles θ23 and θ23’.

 It then calculates the sum of the squares of these differences, that is, Δθ12² + Δθ13² + Δθ23². If the calculated sum of squares is at or below a threshold, it determines that the patient displayed on the AR glasses 30 matches the patient whose region-of-interest image was taken; otherwise, it determines that they do not match. A degree of match may also be calculated from the sum of squares and displayed on the display unit 31 of the AR glasses 30, expressed, for example, as a percentage with a formula in which the degree of match is 100% when the sum of squares is 0 and decreases as the sum of squares increases.

 Although the marker MK4 is used as the specific marker above, any of the markers MK1 to MK3 may be used as the specific marker instead.

 The determination unit 397 may also determine the degree of match between the two patients based on the distances between the positions of the markers MK1 to MK4 identified by the marker identification unit 391 and the corresponding positions of the markers MK1’ to MK4’ in the CT image Pcc serving as the region-of-interest image.

 For example, the coordinates Xm1’ to Xm4’ of the markers MK1’ to MK4’ in the coordinate system Cx1 of the CT image Pcc are converted into the coordinate system Cx2 of the AR glasses 30 by an affine transformation or the like, and the degree of match between the two patients is determined based on the converted coordinates Xm1” to Xm4” and the coordinates Xm1 to Xm4 of the markers MK identified in the AR glasses 30.

 具体的には、例えば座標Xm1”と座標Xm1との距離D1、座標Xm2”と座標Xm2との距離D2、座標Xm3”と座標Xm3との距離D3、及び座標Xm4”と座標Xm4との距離D4の二乗和を算出し、算出した二乗和が閾値以下の場合は、ARグラス30に表示された患者と、関心領域画像が撮像された患者と、が一致すると判定し、算出した二乗和が閾値より大きい場合は、ARグラス30に表示された患者と、関心領域画像が撮像された患者と、が不一致であると判定する。 Specifically, for example, the sum of squares of the distance D1 between coordinate Xm1" and coordinate Xm1, the distance D2 between coordinate Xm2" and coordinate Xm2, the distance D3 between coordinate Xm3" and coordinate Xm3, and the distance D4 between coordinate Xm4" and coordinate Xm4 is calculated, and if the calculated sum of squares is equal to or less than a threshold, it is determined that the patient displayed on the AR glasses 30 matches the patient whose region of interest image was captured, and if the calculated sum of squares is greater than the threshold, it is determined that the patient displayed on the AR glasses 30 does not match the patient whose region of interest image was captured.
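The distance-based check can be sketched in the same way. In this minimal illustration the affine registration step is omitted: the CT marker coordinates Xm1" to Xm4" are assumed to be already expressed in the AR coordinate system, and the threshold is likewise an assumption, not a value from the disclosure.

```python
def distance_match(ar_pts, ct_pts, threshold=4.0):
    """Judge patient identity from corresponding marker positions.

    ar_pts: 2D coordinates Xm1..Xm4 of markers MK1..MK4 on the AR glasses
    ct_pts: the CT-image marker coordinates already transformed into the
            AR coordinate system (Xm1''..Xm4'')

    Returns (is_match, sq_sum), where sq_sum is the sum of squares
    D1^2 + D2^2 + D3^2 + D4^2 of the corresponding-marker distances.
    """
    # Sum of squared Euclidean distances between corresponding markers
    sq_sum = sum(
        (ax - cx) ** 2 + (ay - cy) ** 2
        for (ax, ay), (cx, cy) in zip(ar_pts, ct_pts)
    )
    return sq_sum <= threshold, sq_sum
```

Identical marker layouts give a squared sum of zero and a match; a systematic offset between the layouts drives the sum past the threshold and yields a mismatch.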

 また、アフィン変換等を行わずに、CT画像Pccの中のマーカMK1’~MK4’と、ARグラス30において識別されたマーカMKの座標Xm1~Xm4と、に基づいて、上記と同様に二乗和を算出し、算出した二乗和に基づいてARグラス30に表示された患者と、関心領域画像が撮像された患者と、の一致度を判定してもよい。なお、算出した二乗和に基づいて一致度を算出し、算出した一致度をARグラス30の表示部31に表示するようにしてもよい。一致度は例えば百分率で表され、算出した二乗和が0の場合を一致度100%とし、算出した二乗和が大きくなるに従って一致度が低下する計算式を用いて算出することができる。 In addition, without performing affine transformation or the like, the sum of squares may be calculated in the same manner as above based on the markers MK1'-MK4' in the CT image Pcc and the coordinates Xm1-Xm4 of the markers MK identified in the AR glasses 30, and the degree of match between the patient displayed on the AR glasses 30 and the patient whose region of interest image was captured may be determined based on the calculated sum of squares. The degree of match may be calculated based on the calculated sum of squares and displayed on the display unit 31 of the AR glasses 30. The degree of match is expressed as a percentage, for example, and can be calculated using a formula in which the degree of match is 100% when the calculated sum of squares is 0, and the degree of match decreases as the calculated sum of squares increases.

 以上説明したように、第6の実施の形態のシステムによれば、ARグラス30に表示された患者と、関心領域画像が撮像された患者と、が一致するか否かを判定することができる。これにより、CT画像を撮影した患者と異なる患者が誤って医師DRの処置を受けてしまうのを防ぐことができる。 As described above, the system of the sixth embodiment can determine whether the patient displayed on the AR glasses 30 matches the patient whose region of interest image was captured. This can prevent a patient other than the patient whose CT image was captured from mistakenly receiving treatment from the doctor DR.

 なお、本実施の形態では、マーカMK4を患者特定用として用いた場合について説明したが、患者PTの実際の画像Pptと、CT画像Pccとを重畳表示する際に、精度良く位置合わせするためのマーカとして用いてもよい。 In this embodiment, the marker MK4 is used to identify the patient, but it may also be used as a marker for accurate alignment when the actual image Ppt of the patient PT and the CT image Pcc are superimposed.

 なお、日本国特許出願第2023-182769号及び日本国特許出願第2024-052353号の開示は、その全体が参照により本明細書に取り込まれる。また、本明細書に記載された全ての文献、特許出願、及び技術規格は、個々の文献、特許出願、及び技術規格が参照により取り込まれることが具体的かつ個々に記された場合と同程度に、本明細書中に参照により取り込まれる。 The disclosures of Japanese Patent Application Nos. 2023-182769 and 2024-052353 are incorporated herein by reference in their entirety. In addition, all documents, patent applications, and technical standards described herein are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard was specifically and individually indicated to be incorporated by reference.

Claims (18)

 患者の体表面にマーカを貼付して前記患者の関心領域を前記マーカと共に撮像して得られた関心領域画像を取得する画像取得部と、
 ユーザの視野に設けられる画像表示部において、ユーザの視線を前記マーカの1つに誘導する視線誘導部と、
 前記視線誘導部により誘導された前記ユーザの視線に対応する領域において前記マーカを識別するマーカ識別部と、
 前記関心領域画像の中の前記マーカを前記マーカ識別部で識別された前記マーカに位置合わせするための変換を前記関心領域画像について行う変換部と
 を備え、
 前記画像表示部には、前記変換部で変換された前記関心領域画像が表示される
ことを特徴とする画像表示装置。
An image display device comprising: an image acquisition unit that acquires a region-of-interest image obtained by attaching a marker to a body surface of a patient and imaging a region of interest of the patient together with the marker;
a line-of-sight guidance unit that guides the user's line of sight to one of the markers in an image display unit provided in the user's field of vision;
a marker identification unit that identifies the marker in an area corresponding to the user's line of sight guided by the line of sight guidance unit;
a conversion unit that performs a conversion on the region of interest image to align the marker in the region of interest image with the marker identified by the marker identification unit,
wherein the image display unit displays the region of interest image converted by the conversion unit.
 前記マーカは、前記患者の関心領域を囲い、且つ前記患者の呼吸による位置の変化量が基準値に比べて小さい位置に、少なくとも3個配置される、請求項1に記載の画像表示装置。 The image display device according to claim 1, wherein at least three of the markers are arranged at positions that surround a region of interest of the patient and where an amount of change in position due to respiration of the patient is smaller than a reference value.
 前記マーカは、前記患者の呼吸による位置の変化量が前記基準値に比べて大きい位置に配置される呼吸検知用マーカを更に備え、
 前記画像表示部には、前記呼吸検知用マーカの動きに応じて、前記関心領域画像が変更して表示される、請求項2に記載の画像表示装置。
The marker further includes a respiration detection marker arranged at a position where a change in position due to respiration of the patient is larger than the reference value,
The image display device according to claim 2 , wherein the image of the region of interest displayed on the image display unit is changed in accordance with the movement of the marker for respiration detection.
 前記呼吸検知用マーカの動きに応じて、複数枚の前記関心領域画像から内挿処理して新たな関心領域画像を生成する請求項3に記載の画像表示装置。 The image display device according to claim 3, which generates a new region of interest image by performing an interpolation process from a plurality of region of interest images in response to the movement of the respiratory detection marker.

 前記画像取得部は、前記患者の呼吸の異なる複数の段階について複数の画像を取得する、請求項1~3のいずれか1項に記載の画像表示装置。 The image display device according to any one of claims 1 to 3, wherein the image acquisition unit acquires a plurality of images for a plurality of different stages of the patient's breathing.

 前記マーカは、前記少なくとも3個配置されるマーカと異なる位置に配置されるマーカを含んで少なくとも4個配置され、
 前記関心領域画像は、前記関心領域を前記少なくとも4個のマーカと共に撮像して得られた画像であり、
 前記マーカ識別部で識別された各マーカの位置と、前記関心領域画像の中の各マーカの位置と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する判定部を備えた、請求項2に記載の画像表示装置。
At least four markers are arranged, including a marker arranged at a position different from the at least three markers arranged,
the region of interest image is an image obtained by imaging the region of interest together with the at least four markers;
3. The image display device according to claim 2, further comprising a determination unit that determines a degree of match between a patient displayed on the image display unit and a patient from whom the region of interest image was captured, based on a position of each marker identified by the marker identification unit and a position of each marker in the region of interest image.
 前記判定部は、前記少なくとも4個のマーカの中から特定した特定マーカと他のマーカとの距離と、前記関心領域画像の中の前記特定マーカと他のマーカとの距離と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項6に記載の画像表示装置。 The image display device according to claim 6, wherein the determination unit determines the degree of match between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the distance between the specific marker identified from among the at least four markers and the other markers, and the distance between the specific marker in the region of interest image and the other markers.

 前記判定部は、前記少なくとも4個のマーカの中から特定した特定マーカと他のマーカとを結ぶ線分同士が成す角度と、前記関心領域画像の中の前記特定マーカと他のマーカとを結ぶ線分同士が成す角度と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項6に記載の画像表示装置。 The image display device according to claim 6, wherein the determination unit determines the degree of match between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the angle formed by the line segments connecting the specific marker identified from among the at least four markers to the other markers and the angle formed by the line segments connecting the specific marker in the region of interest image to the other markers.

 前記判定部は、前記マーカ識別部で識別された各マーカの位置と、当該各マーカの位置に対応する、前記関心領域画像の中の各マーカの位置と、の間の各々の距離に基づいて、
前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項6に記載の画像表示装置。
The determination unit determines, based on a distance between a position of each marker identified by the marker identification unit and a position of each marker in the region of interest image corresponding to the position of each marker,
The image display device according to claim 6 , further comprising: a determination unit configured to determine a degree of coincidence between the patient displayed on the image display unit and the patient from whom the region of interest image was captured.
 患者の体表面にマーカを貼付して前記患者の関心領域を前記マーカと共に撮像して得られた関心領域画像を取得するステップと、
 前記マーカを貼付された前記患者と共に前記関心領域画像を画像表示部に表示させるステップと、
 前記画像表示部において、ユーザの視線を前記マーカの1つに誘導するステップと、
 誘導された前記ユーザの視線に対応する領域において前記マーカを識別するステップと、前記関心領域画像の中の前記マーカを、識別された前記マーカに位置合わせするための変換を前記関心領域画像について行うステップと、
 前記変換の後の前記関心領域画像を前記画像表示部に表示させるステップと
をコンピュータに実行させることが可能に構成されたことを特徴とする画像表示制御プログラム。
a step of acquiring a region-of-interest image obtained by attaching a marker to a body surface of a patient and imaging a region of interest of the patient together with the marker;
displaying the region of interest image on an image display unit together with the patient to which the marker is attached;
guiding a user's gaze to one of the markers on the image display unit;
identifying the marker in a region corresponding to the directed line of sight of the user; and performing a transformation on the region of interest image to align the marker in the region of interest image with the identified marker;
and a step of displaying the region of interest image after the conversion on the image display unit, the image display control program being characterized by being configured to be capable of causing a computer to execute these steps.
 前記マーカは、前記患者の関心領域を囲い、且つ前記患者の呼吸による位置の変化量が基準値に比べて小さい位置に、少なくとも3個配置される、請求項10に記載の画像表示制御プログラム。 The image display control program according to claim 10, wherein at least three of the markers are placed in positions that surround the patient's region of interest and where the amount of change in position due to the patient's breathing is smaller than a reference value.

 前記マーカは、前記患者の呼吸による位置の変化量が前記基準値に比べて大きい位置に配置される呼吸検知用マーカを更に備え、
 前記画像表示部には、前記呼吸検知用マーカの動きに応じて、前記関心領域画像が変更して表示される、請求項11に記載の画像表示制御プログラム。
The marker further includes a respiration detection marker arranged at a position where a change in position due to respiration of the patient is larger than the reference value,
The image display control program according to claim 11 , wherein the image of the region of interest displayed on the image display unit is changed in accordance with a movement of the marker for respiration detection.
 前記呼吸検知用マーカの動きに応じて、複数枚の前記関心領域画像から内挿処理して新たな関心領域画像を生成する請求項12に記載の画像表示制御プログラム。 The image display control program according to claim 12, which generates a new region of interest image by performing an interpolation process from a plurality of region of interest images in response to the movement of the respiratory detection marker.

 前記関心領域画像を取得するステップは、前記患者の呼吸の異なる複数の段階について複数の画像を取得する、請求項10~12のいずれか1項に記載の画像表示制御プログラム。 The image display control program according to any one of claims 10 to 12, wherein the step of acquiring an image of the region of interest includes acquiring a plurality of images for a plurality of different stages of the patient's breathing.

 前記マーカは、前記少なくとも3個配置されるマーカと異なる位置に配置されるマーカを含んで少なくとも4個配置され、
 前記関心領域画像は、前記関心領域を前記少なくとも4個のマーカと共に撮像して得られた画像であり、
 前記マーカを識別するステップで識別された各マーカの位置と、前記関心領域画像の中の各マーカの位置と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定するステップを備えた、請求項11に記載の画像表示制御プログラム。
At least four markers are arranged, including a marker arranged at a position different from the at least three markers arranged,
the region of interest image is an image obtained by imaging the region of interest together with the at least four markers;
12. The image display control program according to claim 11, further comprising a step of determining a degree of correspondence between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the positions of each marker identified in the marker identifying step and the positions of each marker in the region of interest image.
 前記判定するステップは、前記少なくとも4個のマーカの中から特定した特定マーカと他のマーカとの距離と、前記関心領域画像の中の前記特定マーカと他のマーカとの距離と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項15に記載の画像表示制御プログラム。 The image display control program according to claim 15, wherein the determining step determines the degree of match between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the distance between a specific marker identified from among the at least four markers and the other markers, and the distance between the specific marker and the other markers in the region of interest image.

 前記判定するステップは、前記少なくとも4個のマーカの中から特定した特定マーカと他のマーカとを結ぶ線分同士が成す角度と、前記関心領域画像の中の前記特定マーカと他のマーカとを結ぶ線分同士が成す角度と、に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項15に記載の画像表示制御プログラム。 The image display control program according to claim 15, wherein the determining step determines the degree of match between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the angle formed by the line segments connecting the specific marker identified from among the at least four markers to the other markers and the angle formed by the line segments connecting the specific marker in the region of interest image to the other markers.

 前記判定するステップは、前記マーカを識別するステップで識別された各マーカの位置と、当該各マーカの位置に対応する、前記関心領域画像の中の各マーカの位置と、の間の各々の距離に基づいて、前記画像表示部に表示された患者と、前記関心領域画像が撮像された患者と、の一致度を判定する、請求項15に記載の画像表示制御プログラム。 The image display control program according to claim 15, wherein the determining step determines the degree of match between the patient displayed on the image display unit and the patient from whom the region of interest image was captured, based on the respective distances between the positions of the markers identified in the marker identifying step and the positions of the markers in the region of interest image that correspond to the positions of the markers.
PCT/JP2024/034939 2023-10-24 2024-09-30 Image display device and image display control program Pending WO2025088983A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2025553033A JPWO2025088983A1 (en) 2023-10-24 2024-09-30

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2023182769 2023-10-24
JP2023-182769 2023-10-24
JP2024-052353 2024-03-27
JP2024052353 2024-03-27

Publications (1)

Publication Number Publication Date
WO2025088983A1 true WO2025088983A1 (en) 2025-05-01

Family

ID=95515641

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/034939 Pending WO2025088983A1 (en) 2023-10-24 2024-09-30 Image display device and image display control program

Country Status (2)

Country Link
JP (1) JPWO2025088983A1 (en)
WO (1) WO2025088983A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150289848A1 (en) * 2014-04-14 2015-10-15 Korea Advanced Institute Of Science And Technology Method and apparatus for registration of medical images
US20170165028A1 (en) * 2014-03-12 2017-06-15 Stichting Katholieke Universiteit Anatomical image projection system
JP2017167965A (en) * 2016-03-17 2017-09-21 東芝メディカルシステムズ株式会社 Image processing apparatus and medical information management system
US20180221566A1 (en) * 2017-02-08 2018-08-09 Veran Medical Technologies, Inc. Localization needle
JP2019030492A (en) * 2017-08-08 2019-02-28 コニカミノルタ株式会社 X-ray image processing apparatus and X-ray image processing method
US20190108645A1 (en) * 2016-04-21 2019-04-11 Elbit Systems Ltd. Method and system for registration verification
JP2021097731A (en) * 2019-12-19 2021-07-01 キヤノンメディカルシステムズ株式会社 Medical image processing device and medical image processing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9436993B1 (en) * 2015-04-17 2016-09-06 Clear Guide Medical, Inc System and method for fused image based navigation with late marker placement


Also Published As

Publication number Publication date
JPWO2025088983A1 (en) 2025-05-01

Similar Documents

Publication Publication Date Title
JP6987893B2 (en) General-purpose devices and methods for integrating diagnostic trials into real-time treatment
TWI741359B (en) Mixed reality system integrated with surgical navigation system
CN102811655B (en) System and device for supporting endoscopic observation
KR102014355B1 (en) Method and apparatus for calculating location information of surgical device
Sielhorst et al. Advanced medical displays: A literature review of augmented reality
JP7469120B2 (en) Robotic surgery support system, operation method of robotic surgery support system, and program
JP5380348B2 (en) System, method, apparatus, and program for supporting endoscopic observation
US20200113636A1 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
TW201717837A (en) Enhanced reality navigation
US11771508B2 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
US11779412B2 (en) Robotically-assisted surgical device, robotically-assisted surgery method, and system
JP2019162339A (en) Surgery supporting system and display method
KR20190134968A (en) Robotic System for Intraluminal Tissue Navigation Compensating Physiological Noise
US10383692B1 (en) Surgical instrument guidance system
JP5934070B2 (en) Virtual endoscopic image generating apparatus, operating method thereof, and program
JP2023502927A (en) Visualization system for use in a surgical environment
CN111658142A (en) MR-based focus holographic navigation method and system
US12327354B2 (en) Apparatus, system and method for supporting a procedure of an imaging examination
TWI679960B (en) Surgical instrument guidance system
WO2025088983A1 (en) Image display device and image display control program
US20220286801A1 (en) System and method for audio signal placement and projecton
KR102694309B1 (en) System for medical information visualization based on augmented reality using landmarks and method thereof
US20210236213A1 (en) Systems and methods for determining an arrangement of explanted tissue and for displaying tissue information
US20240000424A1 (en) Augmented reality for ultrasound exams at the point-of-care in combination with mechanical ventilation
EP4555964A1 (en) Projection apparatus for analysing objects of interest on or over a surface of a subject

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24882100

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2025553033

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2025553033

Country of ref document: JP