WO2019164275A1 - Method and device for recognizing the position of a surgical instrument, and camera - Google Patents
- Publication number
- WO2019164275A1 (PCT/KR2019/002093)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- camera
- information
- body model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
Definitions
- the present invention relates to a position recognition method and apparatus for a surgical instrument and a camera.
- Open surgery refers to surgery in which the medical staff directly sees and touches the part to be treated.
- Minimally invasive surgery is also known as keyhole surgery, and laparoscopic surgery and robotic surgery are typical.
- In laparoscopic surgery, a small hole is made in the necessary part without opening the abdomen, a laparoscope with a special camera attached and surgical tools are inserted into the body, and the procedure is observed through a video monitor.
- Microsurgery is performed using a laser or a special instrument.
- Robotic surgery performs minimally invasive surgery using a surgical robot.
- Radiosurgery refers to surgical treatment with radiation or laser light from outside the body.
- the problem to be solved by the present invention is to provide a method and apparatus for calculating a camera position using an actual surgical image.
- the problem to be solved by the present invention is to provide a method and apparatus for calculating position information of a surgical tool.
- the problem to be solved by the present invention is to provide a method and apparatus for providing position information of a surgical tool.
- the problem to be solved by the present invention is to provide a surgical image-based camera position providing method and apparatus.
- According to an embodiment of the present invention, a camera position calculation method performed by a computer comprises: acquiring a reference object from an actual surgical image photographed by a camera that has entered the body of a surgical subject and setting a reference position for the camera; calculating a position change amount of the camera as the camera moves; and calculating a current position of the camera based on the position change amount of the camera with respect to the reference position.
- A method for calculating position information of a surgical tool, performed by a computer, comprises: calculating a first-point position of the surgical tool with respect to the external body space of a surgical subject, based on sensing information obtained from a sensing device attached to a surgical tool inserted into the internal body space of the subject; calculating a second-point position of the surgical tool with respect to the internal body space of the surgical subject by reflecting characteristic information of the surgical tool, based on the first-point position, through a virtual body model generated in accordance with the physical state of the surgical subject; and providing position information of the surgical tool in the actual internal body space of the surgical subject based on the second-point position of the surgical tool with respect to the virtual body model.
- the first-point position of the surgical tool is the position of a specific point of the surgical tool located in the external body space.
- the second-point position of the surgical tool may be the position of a specific point of the surgical tool located in the internal body space.
- According to an embodiment of the present disclosure, a method for providing location information of a surgical tool, performed by a computer, comprises: obtaining coordinate information of a surgical robot including a surgical tool based on a reference point of a surgical subject; matching coordinate information of a virtual body model, generated according to the physical state of the surgical subject, with the coordinate information of the surgical robot; and calculating the position of the surgical tool in the virtual body model corresponding to the position of the surgical tool obtained from the coordinate information of the surgical robot.
- A surgical-image-based camera position providing method performed by a computer according to an embodiment of the present invention comprises: obtaining a surgical image photographed as the camera enters the body and moves along the surgical path; deriving camera position information on a standard body model based on the surgical image; and providing camera position information on the virtual body model of the current surgical subject based on the camera position information on the standard body model.
- the position of the surgical site and the surgical tool can be obtained by calculating the current position of the camera in real time during the actual surgery.
- According to the present invention, by detecting the camera viewpoint of the surgical image, camera position information can be effectively calculated on a three-dimensional image seen from the same viewpoint in the three-dimensionally modeled body model.
- Camera coordinate information in three-dimensional space can be provided efficiently using only the surgical image.
- According to the present invention, by deriving camera position information for a point corresponding to a reference position on the surgical path, the point where the camera should be located can be provided before the operation.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- FIG. 2 is a flowchart illustrating a camera position calculation method using an actual surgical image according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating a process of setting a reference position for a camera according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating a process of calculating a position change amount of a camera according to an embodiment of the present invention.
- FIG. 5 is a view showing the position of the camera derived according to an embodiment of the present invention on a coordinate plane.
- FIG. 6 is a diagram schematically illustrating a configuration of an apparatus 300 for performing a camera position calculating method using an actual surgical image according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of a system capable of performing robotic surgery in accordance with one embodiment of the present invention.
- FIG. 8 is a flowchart schematically illustrating a method of generating a virtual body model according to an embodiment of the present invention.
- FIG. 9 is a flowchart illustrating a method of calculating position information of a surgical tool according to an embodiment of the present invention.
- FIG. 10 is a diagram schematically showing the configuration of an apparatus 300 for performing a method for calculating position information of a surgical tool according to an embodiment of the present invention.
- FIG. 11 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- FIG. 12 is a flowchart schematically illustrating a method of generating a virtual body model according to an embodiment of the present invention.
- FIG. 13 is a flowchart illustrating a method of providing location information of a surgical tool according to an embodiment of the present invention.
- FIG. 14 is a view schematically showing the configuration of an apparatus 300 for performing a method for providing position information of a surgical tool according to an embodiment of the present invention.
- FIG. 15 is a flowchart illustrating a surgical-image-based camera position providing method according to an embodiment of the present invention.
- FIG. 16 is a diagram illustrating the surgical path of each patient for a specific surgery.
- FIG. 17 is a view for explaining an example of a process of deriving camera position information on a standard body model from a surgical image according to an embodiment of the present invention.
- FIG. 18 is a diagram for explaining an example of a process of converting coordinate information of a virtual body model of a patient into coordinate information of a standard body model according to an embodiment of the present invention.
- FIG. 19 is a diagram illustrating a process of converting coordinate information of a standard body model into coordinate information of a virtual body model for a current surgery subject, according to an exemplary embodiment.
- FIGS. 20 and 21 are diagrams for explaining a process of deriving position information on a standard body model from a surgical image through learning according to an embodiment of the present invention.
- FIG. 22 is a diagram schematically illustrating a system capable of performing robot surgery according to an embodiment of the present invention.
- FIG. 23 is a diagram schematically illustrating a configuration of an apparatus 500 for performing a surgical image-based camera position providing method according to an embodiment of the present invention.
- a “part” or “module” refers to a software or hardware component, such as an FPGA or ASIC, and a “part” or “module” performs certain roles. However, a “part” or “module” is not limited to software or hardware.
- the “unit” or “module” may be configured to reside in an addressable storage medium or may be configured to execute one or more processors.
- a “part” or “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Functions provided within components and “parts” or “modules” may be combined into a smaller number of components and “parts” or “modules”, or further separated into additional components and “parts” or “modules”.
- image may mean multidimensional data composed of discrete image elements (eg, pixels in a 2D image and voxels in a 3D image).
- the image may include a medical image of the object obtained by the CT imaging apparatus.
- an "object” may be a person or an animal, or part or all of a person or an animal.
- the subject may include at least one of organs such as the liver, heart, uterus, brain, breast, abdomen, and blood vessels.
- a "user” may be a doctor, a nurse, a clinical pathologist, a medical imaging professional, or the like, and may be a technician who repairs a medical device, but is not limited thereto.
- medical image data is a medical image photographed by a medical imaging apparatus, and includes all medical images that can be implemented as a three-dimensional model of the body of an object.
- Medical image data may include computed tomography (CT) images, magnetic resonance imaging (MRI), positron emission tomography (PET) images, and the like.
- the term "virtual body model” refers to a model generated according to the actual patient's body based on medical image data.
- the “virtual body model” may be generated by modeling medical image data in three dimensions as it is, or may be corrected as in actual surgery after modeling.
- virtual surgery data refers to data including rehearsal or simulation actions performed on a virtual body model.
- the “virtual surgery data” may be image data for which rehearsal or simulation has been performed on the virtual body model in the virtual space, or may be data recorded for a surgical operation performed on the virtual body model.
- actual surgery data refers to data obtained by performing a surgery by an actual medical staff.
- Real surgery data may be image data photographing the surgical site in the actual surgical procedure, or may be data recorded for the surgical operation performed in the actual surgical procedure.
- a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
- a computer may be not only a desktop PC or a notebook but also a smartphone, a tablet PC, a cellular phone, a PCS phone (Personal Communication Service phone), a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a Personal Digital Assistant (PDA), and the like.
- a head mounted display (HMD) device includes a computing function
- the HMD device may be a computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
- surgical robot 34 includes imaging device 36 and surgical instrument 38.
- the robot surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robot surgery may be automatically performed by the controller 30 without the user's control.
- the server 100 is a computing device including at least one processor and a communication unit.
- the controller 30 includes a computing device including at least one processor and a communication unit.
- the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image captured by the imaging device 36 is displayed on the display 32.
- surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing operations, and the like, of the surgical site.
- Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
- the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
- the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
- the server 100 generates information necessary for robotic surgery using medical image data of an object previously photographed by the medical imaging apparatus 10, and provides the generated information to the controller 30.
- the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
- the means that can be used as the medical imaging apparatus 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- the surgical image obtained by the imaging device 36 is transmitted to the controller 30.
- FIG. 2 is a flowchart illustrating a camera position calculation method using an actual surgical image according to an embodiment of the present invention.
- each step shown in FIG. 2 is performed in time series in the server 100 or the controller 30 shown in FIG. 1.
- each step is described as being performed by a computer, but the performing agent of each step is not limited to a specific device, and all or part of the steps may be performed by the server 100 or the controller 30.
- a computer acquires a reference object from an actual surgery image photographed by a camera that has entered the body of a surgery subject and sets a reference position with respect to the camera (S100), calculates a position change amount of the camera as the camera moves (S200), and calculates a current position of the camera based on the position change amount of the camera with respect to the reference position (S300).
- the computer may acquire a reference object from an actual surgery image photographed by a camera that enters the body of the surgery subject and set a reference position with respect to the camera (S100).
- the actual surgery image refers to data obtained as the actual medical staff performs the surgery; for example, it may be an image of an actual surgical scene performed by the surgical robot 34.
- the actual surgical image may be a stereoscopic 3D image, and thus the actual surgical image may be an image having a three-dimensional stereoscopic sense, that is, a depth. Therefore, it is possible to accurately grasp the position of the surgical tool in the three-dimensional space through the depth map of the actual surgical image.
- the camera may be a device capable of capturing 3D images, for example, may be a stereo camera. Accordingly, the actual surgical image may be captured as a 3D image by a stereo camera.
- the depth information of the actual surgical image does not need to include a depth value for every pixel of the entire screen area; depth values may be obtained only for the minimal set of reference points sufficient to express the geometric features. For example, if no reference 3D data model (eg, a virtual body model) exists, a depth value may be required for every pixel, but if a 3D data model (eg, a virtual body model) already exists, a depth value is not needed for every pixel.
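- As an illustration of how such a depth map might be computed when the camera is a stereo camera, consider the following minimal Python sketch using OpenCV's semi-global block matching; the file names, disparity range, focal length, and baseline are all hypothetical stand-ins for the laparoscope's actual calibration, not values from this disclosure.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair from the laparoscopic stereo camera.
left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching yields a disparity map; SGBM returns
# fixed-point disparities scaled by 16, hence the division.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth = focal_length * baseline / disparity (pinhole stereo geometry).
# focal_px and baseline_mm would come from the stereo camera's calibration.
focal_px, baseline_mm = 700.0, 4.0
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = focal_px * baseline_mm / disparity[valid]
```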
- the computer may acquire an image taken by a camera entering the patient's body as an actual surgical image.
- the actual surgery image may be image data capturing, for example, the process of moving through the inside of the patient's body to the surgical site, and the process from the start to the end of the action performed on the surgical site by the surgical tool.
- the object may be an organ located inside the body of the patient to be operated, or another part other than the organ.
- the reference object may be a specific organ, or a specific part or region, that satisfies a predetermined condition among objects located inside the body of the surgery subject. That is, an organ or internal part may be used as the reference object if it satisfies at least one of the following conditions: its features are easy to detect from the image; it is present at a fixed position inside the body; it has no or very little movement at the time of surgery; its shape does not deform; it is not affected by surgical instruments; and it can be acquired from medical image data such as data captured by CT or PET. For example, a part with little movement during surgery, such as the liver, or a part that can be obtained from medical image data, such as the stomach, esophagus, or gallbladder, may be determined as the reference object.
- FIG. 3 is a flowchart illustrating a process of setting a reference position for a camera according to an embodiment of the present invention. FIG. 3 illustrates the operation of step S100 in detail.
- the computer may acquire a virtual image including the reference object from the virtual body model of the surgery subject (S110).
- the virtual body model may be 3D modeling data generated based on medical image data photographing the inside of the body of the patient in advance.
- the virtual image may be an image of a viewpoint having the same information as that captured by an actual camera in a virtual body model constructed in 3D.
- the camera that enters the patient's body during surgery photographs all objects in the direction the camera is facing.
- the computer may select an image of a desired viewpoint, that is, an image of a viewpoint including a reference object, from among the actual surgical images photographed by the camera. Thereafter, the computer may acquire a virtual image in the virtual body model corresponding to the actual surgical image of the viewpoint including the reference object.
- the computer may match the reference object included in the virtual image with the reference object included in the actual surgical image (S120).
- the registration between the two images may be performed by extracting and comparing features of the reference object.
- a depth map may be obtained or features of a reference object may be extracted by applying image segmentation techniques of deep learning.
- the computer may obtain a depth map of the reference object from the actual surgery image, and compare the depth map information with respect to the reference object obtained from the virtual image to determine whether the reference object between the two images is the same object.
- Depth maps can be used to compare surface information of objects (organs). Some objects are difficult to identify by surface features alone.
- the computer may apply a segmentation technique of deep learning to obtain a segmentation image of a reference object and perform registration between the two images.
- the computer may use the segmentation image information together with the depth map information to perform registration between the two images. For example, if an object (organ) has a smooth surface, it is difficult to identify the object through the depth map alone, so deep-learning image segmentation may be applied together.
- image segmentation may be applied to the surface features of the object, and the depth map may be applied to its three-dimensional structure.
- since the virtual body model is generated to include information on the type of each object and location information for each object, such information can be configured as an image map.
- the computer may detect an object region that is expected to be matched in the virtual image based on the feature information of the reference object obtained from the actual surgical image.
- the computer may obtain depth information and segmentation information based on the detected object region in the virtual image, and may construct the map image.
- the computer may compare the map image information of the object obtained from the virtual image with the feature information (depth information and segmentation image information) of the reference object obtained from the actual surgical image to determine whether the reference object is identical between the two images.
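- A minimal sketch of this combined comparison might look as follows, assuming the segmentation masks and depth maps for the reference object have already been obtained from the actual surgical image and the virtual image at matched viewpoints; the function name and both thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def is_same_reference_object(mask_real, depth_real, mask_virtual, depth_virtual,
                             iou_thresh=0.5, depth_rmse_thresh=5.0):
    """Check whether the reference object matches between the actual surgical
    image and the virtual image (viewpoints assumed already aligned)."""
    # Region agreement: intersection-over-union of the segmentation masks.
    inter = np.logical_and(mask_real, mask_virtual).sum()
    union = np.logical_or(mask_real, mask_virtual).sum()
    iou = inter / union if union else 0.0

    # Surface agreement: RMS depth difference over the overlapping region.
    overlap = np.logical_and(mask_real, mask_virtual)
    if not overlap.any():
        return False
    rmse = np.sqrt(np.mean((depth_real[overlap] - depth_virtual[overlap]) ** 2))

    return iou >= iou_thresh and rmse <= depth_rmse_thresh
```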
- the computer may acquire an actual surgical image of a predetermined viewpoint of the camera, and acquire a virtual image of the virtual body model corresponding to the viewpoint, and perform registration between the two images.
- the computer may recognize the location of the surgical tool or organ (object) in the surgical procedure through deep learning, and thereby obtain an actual surgical image of a predetermined viewpoint of the camera. That is, the computer recognizes the position of the reference object through deep learning, and obtains the reference position of the camera by acquiring the actual surgical image of the recognized viewpoint.
- the computer may first recognize the position of the reference object through deep learning to obtain an actual surgery image of the viewpoint including the reference object, and then match it with the virtual image of the virtual body model at the viewpoint corresponding to that actual surgery image.
- the matching between the actual surgical image and the virtual image may apply a depth map or an image segmentation technique of deep learning as described above.
- the position of the reference object may be accurately found by first finding an image of a viewpoint including the reference object in the actual surgery image and narrowing the area, and then matching the actual surgery image and the virtual image of the viewpoint.
- the computer may acquire the position of the reference object from the virtual body model based on the result of registration of the reference object between the two images, and set the reference position with respect to the camera based on this.
- the computer can determine the reference position of the camera based on the position of the reference object.
- the computer may calculate the position change amount of the camera as the camera moves (S200).
- the position of the camera may continue to change. Therefore, after setting the reference position of the camera, it is possible to calculate the position of the camera during surgery by calculating the amount of change in accordance with the movement of the camera in real time.
- FIG. 4 is a flowchart illustrating a process of calculating a position change amount of a camera according to an embodiment of the present invention. FIG. 4 illustrates the operation of step S200 in detail.
- the computer may acquire a first image and a second image from which the camera movement is detected from the actual surgical image (S210). That is, the computer detects an image in which the movement of the camera occurs from the actual surgical image photographed by the camera after setting the reference position.
- the computer may detect the first image and the second image in which camera movement occurs from the actual surgical image by applying techniques such as deep learning to the features of the surgical image, as described above.
- a case in which both the camera and the surgical tool move may also be considered; in this case, applying a deep learning technique makes it possible to detect, from the actual surgical image, the first and second images in which camera motion occurs.
- the computer may detect a change between the images by matching the first image and the second image (S220).
- the computer may extract and match the feature points of each of the first and second images.
- the computer may extract each feature point for each of the first image and the second image by using an algorithm such as scale-invariant feature transform (SIFT).
- various feature point extraction algorithms may be used in addition to the SIFT.
- the computer may match the feature points of the first image and the feature points of the second image with each other, and detect a change between the matched feature points.
- a feature point matching algorithm may be used, for example, brute-force matching or FLANN (Fast Library for Approximate Nearest Neighbors) based search.
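- For instance, SIFT extraction followed by FLANN matching with Lowe's ratio test could be sketched as below using OpenCV; the frame file names and the 0.7 ratio are placeholder assumptions for illustration.

```python
import cv2

# The two frames between which camera movement was detected (placeholder files).
first_image = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
second_image = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints and descriptors for each frame.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(first_image, None)
kp2, des2 = sift.detectAndCompute(second_image, None)

# FLANN (KD-tree) matching with Lowe's ratio test to keep reliable pairs.
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

# Matched pixel coordinates, used to detect the change between the two images.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```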
- the computer may construct a point cloud by calculating a depth map for each of the first image and the second image.
- corresponding points in the point cloud for each of the two images may also be matched by matching the feature points between the first image and the second image.
- the matched feature points are converted into 3D coordinates using the depth maps of the first and second images, and matching in the point clouds between the two images can be performed based on the converted 3D coordinates.
- the computer can know the position change amount of the camera based on the position change of the feature point (corresponding point) between the two images.
- when constructing the point clouds for the first image and the second image, the computer may obtain depth maps through stereo image matching; however, this is only one example, and the point clouds for the first and second images may be constructed and matched using other methods.
- a single image may also be used to obtain a depth map. Since the size of the surgical tool is accurately known and does not change during surgery, and the characteristics of the camera are kept constant, a depth value for a point of interest (feature point) may be obtained using a single image.
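- To build the point cloud, each matched feature point can be back-projected with a pinhole camera model using its depth value, as in this sketch; the intrinsic parameters and the constant-depth test data are assumed placeholder values, not calibration from the disclosure.

```python
import numpy as np

def backproject(points_px, depth_map, fx, fy, cx, cy):
    """Lift 2D feature points to 3D camera coordinates using the depth map
    (pinhole model; fx, fy, cx, cy are assumed calibration intrinsics)."""
    cloud = []
    for u, v in points_px:
        z = depth_map[int(v), int(u)]           # depth at the feature pixel
        cloud.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(cloud)

# Placeholder data: two matched feature points and a constant-depth map.
depth_map = np.full((480, 640), 80.0)           # mm
pts = [(320.0, 240.0), (350.0, 260.0)]
cloud = backproject(pts, depth_map, 700.0, 700.0, 320.0, 240.0)
```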
- the computer may calculate the position change amount of the camera based on the change between images (S230).
- the change between the images may be estimated as the amount of change according to the movement of the camera.
- the computer may calculate the position change amount of the camera based on the surgical tool according to whether the surgical tool is included in the actual surgical image.
- the computer may determine whether a surgical tool is included in the actual surgical image acquired as the camera movement occurs. For example, the computer may determine whether the surgical tool is included in the first image acquired at the start of the movement of the camera and the second image acquired at the end of the movement of the camera. In this case, the computer may determine whether a surgical tool exists from the first image and the second image by using an image recognition method.
- the computer compares the information of the surgical tool in the first image with the information of the surgical tool in the second image, and relatively calculates the position change amount of the camera based on the comparison result.
- since the surgical tool does not move while the camera moves during the operation, the surgical tool remains fixed in the images (the first image and the second image) acquired from the start to the end of the camera's movement. Therefore, if a surgical tool is included in the first image and the second image, any change in the appearance of that surgical tool can only be due to the movement of the camera. That is, the computer derives changes in the position, size, direction, and the like of the surgical tool included in each of the first and second images, and relatively calculates the position change of the camera based on the change information between the surgical tool appearances in each image.
- the computer may determine whether at least one surgical tool is included in the images acquired from the start of the movement of the camera to the end of the movement of the camera. For example, the computer may determine whether a surgical tool is included in all or a part of the image frame acquired at the start of the movement of the camera, and determine whether the surgical tool is included in the image frame at the end of the movement of the camera. Subsequently, the amount of position change of the camera may be relatively calculated by comparing information of the surgical tool between the image frames including the surgical tool according to the determination result.
- the computer may determine whether the surgical tool is included in at least one image frame acquired at the start or at the end of the camera's movement, and relatively calculate the position change of the camera. For example, when the surgical tool is included in the first image acquired when the camera starts to move, or in the second image acquired when the camera stops moving, the computer can relatively calculate the moving distance of the camera with reference to the image including the surgical tool. That is, since the surgical tool is fixed while the camera moves, the distance to the position to which the camera has moved (that is, the start position or the end position of the camera) is calculated based on the position of the fixed surgical tool, and the position change amount of the camera can likewise be calculated relatively.
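- Under the simplifying assumption of a pure camera translation, the fixed surgical tool acts as an anchor: its camera-frame coordinates shift by exactly the opposite of the camera's own motion. The sketch below illustrates this; all coordinate values are placeholders.

```python
import numpy as np

# 3D position of the same fixed surgical-tool point in the camera frame,
# before and after the camera movement (placeholder values, in mm).
tool_before = np.array([0.0, 10.0, 80.0])
tool_after = np.array([5.0, 10.0, 70.0])

# With the tool stationary in the world and the camera only translating,
# the tool's camera-frame coordinates shift by the opposite of the camera's
# motion, so the camera displacement is the negated difference.
camera_translation = -(tool_after - tool_before)
print(camera_translation)  # [-5.  0.  10.]
```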
- by performing the above-described process of FIG. 4, the computer may calculate the position change amount of the camera based on the change between the images.
- the computer may calculate the current position of the camera based on the position change amount of the camera with respect to the reference position (S300).
- since the position change amount of the camera is calculated as the amount of change according to movement from the reference position, the current position of the camera can finally be calculated by applying the position change amount of the camera to the reference position.
- the computer may calculate the current position of the camera by deriving the change between the first image and the second image through a rotation or translation matrix and reflecting the derived value in the reference position.
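- Concretely, applying the estimated rotation/translation to the reference position can be expressed by composing 4x4 homogeneous transforms, as in this sketch; the reference pose and the inter-frame motion values are placeholders.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Reference pose of the camera (step S100) and the rotation/translation
# estimated between the first and second images (step S200); placeholders.
T_reference = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 0.0]))
R_delta = np.eye(3)
t_delta = np.array([-5.0, 0.0, 10.0])

# Reflecting the inter-frame change in the reference position gives the
# current camera pose in the reference coordinate system.
T_current = T_reference @ to_homogeneous(R_delta, t_delta)
current_position = T_current[:3, 3]
```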
- the computer may derive the current position of the camera and, based on this, calculate the position of the surgical site or of the surgical tool for the subject currently undergoing surgery.
- the computer may match the surgical position in the virtual body model to the corresponding surgical position of the surgical subject based on this. That is, providing the actual surgical position in the virtual body model has the effect of providing more accurate surgical guide information to the medical staff.
- the computer may acquire an actual surgical image including a new reference object photographed while the camera moves.
- the computer may acquire a new reference object from the actual surgery image and reset the reference position based on the acquired new reference object.
- the computer can also recalculate the current position of the camera based on the reset reference position. This may be applied in the same manner as described in the above steps S100 to S300.
- the computer may recalculate the current position of the camera by resetting the reference position only when the new reference object corresponds to a preset reference object. For example, whenever a new reference object is photographed and acquired according to the movement of the camera, the current position of the camera may be updated by performing steps S100 to S300 described above; alternatively, when a new reference object is acquired at a preset time interval, the current position of the camera may be updated by performing steps S100 to S300.
- FIG. 5 is a view showing the position of the camera derived according to an embodiment of the present invention on a coordinate plane.
- once the actual coordinate space 200 (the absolute coordinate space) is set, the position 220 of the camera may be derived.
- the actual position of the surgical site and the surgical tool can be calculated based on the position of the camera.
- the relative positions of the surgical site and the surgical tool may be obtained based on the coordinate space 210 of the camera, and their actual positions may be derived by converting them back into a positional relationship on the actual coordinate space 200.
- FIG. 6 is a diagram schematically illustrating a configuration of an apparatus 300 for performing a camera position calculating method using an actual surgical image according to an embodiment of the present invention.
- the processor 310 may include one or more cores (not shown) and a graphics processor (not shown), and/or a connection passage (for example, a bus) that transmits and receives signals to and from other components.
- the processor 310 executes one or more instructions stored in the memory 320 to perform a camera position calculation method using the actual surgical image described with reference to FIGS. 2 to 4.
- by executing one or more instructions stored in the memory 320, the processor 310 acquires a reference object from an actual surgery image taken by a camera entering the body of the surgery subject and sets a reference position for the camera, calculates the position change amount of the camera as the camera moves, and calculates the current position of the camera based on the position change amount of the camera with respect to the reference position.
- the processor 310 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 310.
- the processor 310 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
- the memory 320 may store programs (one or more instructions) for processing and controlling the processor 310. Programs stored in the memory 320 may be divided into a plurality of modules according to their functions.
- the camera position calculation method using the actual surgical image according to the embodiment of the present invention described above may be implemented as a program (or an application) to be executed in combination with a computer which is hardware and stored in a medium.
- the internal body object may include a body part existing inside the body, an object introduced from the outside, an object generated by itself, and the like.
- the body part may be an object existing inside the body such as organs, blood vessels, bones, tendons, nervous tissues, and the like.
- the object introduced from the outside may be a consumable necessary for surgery such as a gauze, a clip, or the like.
- Objects generated by the body itself may include bleeding from a body part.
- FIG. 7 is a schematic diagram of a system capable of performing robotic surgery in accordance with one embodiment of the present invention.
- the configuration of the robotic surgical system of FIG. 7 (the medical imaging apparatus 10, the server 100, the control unit 30, the display 32, and the surgical robot 34) is the same as that described above with reference to FIG. 1.
- hereinafter, a method is described in which information obtained from the external body space of the subject is applied to the virtual body model in real time during actual surgery, thereby providing accurate location information of the surgical instruments on the virtual body model. First, the virtual body model will be described in detail.
- FIG. 8 is a flowchart schematically illustrating a method of generating a virtual body model according to an embodiment of the present invention.
- each step illustrated in FIG. 8 is performed in time series in the server 100 or the controller 30 illustrated in FIG. 7.
- each step is described as being performed by a computer, but the performing agent of each step is not limited to a specific apparatus, and all or part of the steps may be performed by the server 100 or the controller 30.
- the computer may acquire medical image data of a surgical subject photographed by CT, MRI, PET, etc. (S100).
- the computer may calculate the imaging posture of the surgical subject based on the surgical posture, which is determined by the location of the affected part or the type of operation. Since the surgical posture may vary depending on the location of the affected part (ie, the surgical site) and the type of operation, the imaging posture is calculated so that the medical image can be taken in the same posture as the surgical posture of the subject, and medical image data can be obtained accordingly.
- the computer may perform 3D modeling based on the medical image data of the surgery subject to generate a virtual body model in accordance with the body of the surgery subject (S110). That is, the virtual body model may be 3D modeling data generated by 3D rendering of medical image data photographed by a medical imaging apparatus such as CT, MRI, or PET.
- for example, when CT image data is acquired with the same imaging posture applied to a specific patient, 3D modeling data of the patient's body can be generated by calculating the density of each point based on the color (ie, gray scale) of each CT image frame, expressed as a Hounsfield Unit (HU) value.
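- One conventional way to turn a HU-valued CT volume into 3D modeling data is iso-surface extraction, e.g. marching cubes as sketched below; the volume file name, HU level, and voxel spacing are assumptions for illustration only.

```python
import numpy as np
from skimage import measure

# Hypothetical CT volume in Hounsfield Units (z, y, x), e.g. built from DICOM slices.
ct_hu = np.load("ct_volume_hu.npy")

# Iso-surface level in HU; the value is an assumption and would be chosen
# per tissue of interest (soft tissue sits roughly in the -100..300 HU band).
level_hu = 100.0

# Marching cubes extracts a surface mesh at that HU level; `spacing` carries
# the voxel size in mm so the mesh is in physical units.
verts, faces, normals, values = measure.marching_cubes(
    ct_hu, level=level_hu, spacing=(1.0, 0.7, 0.7))
```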
- the computer may generate 3D modeling data by applying a correction algorithm that transforms image data photographed under general imaging conditions into the body state during a specific surgery.
- the computer may generate 3D modeling data by applying, to medical image data photographing a patient lying in a normal state, the body changes that form a pneumoperitoneum model, such as body changes due to gravity in a tilted position or due to carbon dioxide insufflation.
- the correction algorithm may be obtained by learning training data formed by matching medical image data taken under general imaging conditions and physical conditions with body state data during surgery for specific physical and surgical conditions.
- the body state data during surgery may include image data photographing the shape or arrangement of the internal organs of the body during actual surgery, or image data photographing the body surface.
- when the pneumoperitoneum model is formed during laparoscopic or robotic surgery, the computer may construct training data by matching the medical image data of the patient with the body surface data acquired during surgery, and perform learning on it.
- Body surface data may be taken by a medical staff prior to surgery.
- the medical team may acquire body surface data by photographing the abdominal surface deformed by carbon dioxide insufflation before performing laparoscopic or robotic surgery.
- as the training data is learned, the computer may generate a correction algorithm (eg, a pneumoperitoneum algorithm) for transforming 3D modeling data of the general state into the pneumoperitoneum model during laparoscopic or robotic surgery. Thereafter, when medical image data of a new patient is acquired, the computer may generate a pneumoperitoneum model by modifying the 3D modeling data generated from that medical image data through the correction algorithm.
- medical personnel can perform simulations or rehearsals that provide virtual surgery through a virtual body model.
- during the actual surgery, a virtual image (that is, a 3D modeling image) may be provided through the virtual body model along with the actual surgical image. Accordingly, more accurate surgical information can be provided by reflecting and displaying on the virtual body model the information on body parts that are not displayed, or are difficult to identify, in the actual surgical image.
- for example, the camera photographs a first surgical site existing in its field of view and a surgical tool located at that site. Then, when the operation must proceed from the first surgical site to a second surgical site, the camera is moved to the second surgical site and the surgical tool is moved toward the target point by the surgical robot. At this time, only the second surgical site is photographed in the field of view of the camera moved to the second surgical site, and the surgical tool is not captured by the camera. Therefore, since the location information of the surgical tool, or information about the inside of the body affected by the movement of the surgical tool, cannot be understood through the surgical image, the medical staff in charge of the operation cannot receive accurate surgical information.
- accordingly, the present invention proposes a method that can provide accurate information on the position of the surgical tool and the inside of the patient's body by applying a virtual body model during minimally invasive surgery such as laparoscopic or robotic surgery.
- FIG. 9 is a flowchart illustrating a method of calculating position information of a surgical tool according to an embodiment of the present invention.
- each step shown in FIG. 9 is performed in time series in the server 100 or the controller 30 shown in FIG. 7.
- each step is described as being performed by a computer, but the performing agent of each step is not limited to a specific apparatus, and all or part of the steps may be performed by the server 100 or the controller 30.
- the present invention can be applied in the case of minimally invasive surgery such as laparoscopic surgery or robotic surgery.
- surgery is performed by inserting a surgical tool into the internal space of the subject.
- the surgical tool may be configured to include an operating part inserted into the body to perform a surgical operation and an arm coupled to the operating part to extend to the outside of the body.
- the operation unit is a part capable of performing an operation such as catching, cutting, moving, or suturing a target object by accessing the surgical site, and may be configured with various instruments according to the purpose of the operation.
- the arm is connected to the operating unit to control the movement of the operating unit.
- the medical staff conducting the surgery acquires a surgical image of the operating part of the surgical tool using a camera inserted into the patient's body, grasps the position or movement of the operating part through it, and performs the surgery.
- however, since the field of view is limited according to the position or direction of the camera as described above, information about the surgical instruments inside the body is also limited. Therefore, the present invention provides more accurate surgical information (that is, the location of the surgical instruments) by applying, to the virtual body model, information that can be obtained from the external body space, without relying only on the information of the internal body space included in the surgical image obtained by the camera.
- a method of calculating position information of a surgical tool may be performed based on sensing information obtained from a sensing device attached to a surgical tool inserted into a body space of a surgical target by a computer.
- the computer may calculate a position of the first point of the surgical tool with respect to the external space of the surgical subject based on the sensing information obtained from the sensing device attached to the surgical tool inserted into the internal space of the surgical subject (S200).
- the first-point position of the surgical tool refers to spatial position information of a specific point (eg, one end) of the arm, the part extending from inside the patient's body to the outside and visible in the external space.
- the second point position of the surgical tool refers to spatial position information in which a specific point (eg, a distal end) of an operating part inserted into the body of a surgical subject is located.
- the sensing device may be installed on the arm of the surgical tool, and may transmit and receive a sensing signal to sense the position information, motion information, and the like of the surgical tool.
- the sensing device may use an infrared measurement sensor, an electromagnetic sensor, an inertial measurement unit (IMU), a motion measurement sensor, or the like. If an infrared measurement sensor is installed on the arm of the surgical tool, the computer can identify the position of the surgical tool during the operation by acquiring the sensing signal from that sensor through an infrared camera installed above the operating table or on the surgical robot.
- the computer may determine the location of the surgical tool by generating electromagnetic waves in the surgical space during surgery to obtain position information of the electromagnetic sensor.
- through the sensing device installed on the arm of the surgical tool, it is possible to obtain the position information of the surgical tool (that is, the arm) in the external body space of the surgical subject.
- the computer may calculate the position of the first point of the surgical tool by using the reference position on the body surface of the surgical subject together with the sensing information obtained by the sensing device attached to the surgical tool.
- the reference position refers to at least one specified point on the body surface of the surgical subject.
- the reference point may be a central location or a representative point on the body, or an insertion point at which a surgical tool is inserted during minimally invasive surgery.
- the position sensing device may be attached to at least one specified point on the outer body surface of the subject to obtain a reference position based on information sensed therefrom.
- the reference position can be expressed as coordinate information on the outer body space (surgical space).
- the computer obtains at least one reference position specified on the body surface of the surgical subject and, based on the obtained at least one reference position, may use the sensing information of the surgical tool (ie, the position information of the surgical tool obtained by the sensing device) to derive the first-point position of the surgical tool (ie, the arm) in the external body space. That is, the computer may calculate coordinate information of the point where the surgical tool (ie, the arm) is located in the external body space relative to the reference position.
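- Expressing the arm's sensed position relative to the reference position is then a simple change of origin, as in the following sketch; all coordinate values are placeholders rather than values from the disclosure.

```python
import numpy as np

# Reference position on the body surface (e.g. the tool insertion point) and
# the sensed position of the arm, both in the tracker's coordinate system.
# All coordinate values are placeholders.
reference_point = np.array([120.0, 45.0, 300.0])     # mm
arm_sensor_reading = np.array([160.0, 80.0, 350.0])  # mm

# The first-point position is expressed relative to the reference position,
# i.e. coordinates of the arm in the external body space.
first_point = arm_sensor_reading - reference_point
```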
- the computer reflects the characteristic information of the surgical tool, based on the first-point position of the surgical tool, and may thus calculate the second-point position of the surgical tool with respect to the internal body space of the subject (S210).
- the virtual body model is 3D modeling data generated based on the medical image data of the surgical subject; it is implemented to match the physical state of the surgical subject during the actual surgery and may be provided to the medical staff along with the actual surgical images through techniques such as AR (Augmented Reality) or MR (Mixed Reality).
- the characteristic information of the surgical tool is determined by the type and operation of the tool. That is, once the surgical tool to be used during the operation is specified, the computer can obtain, as the characteristic information, unique information such as the length of the specified tool, its degrees of joint freedom (e.g., rotation angle and direction), and its range of movement.
- the computer may acquire characteristic information including at least one of length information and motion information of the surgical tool, and reflect the acquired characteristic information in the virtual body model. At this time, the computer may derive the position of the second point of the surgical tool from the virtual body model by reflecting the characteristic information of the surgical tool in the virtual body model, based on the position of the first point calculated in step S200. In other words, if the computer knows the length information and the motion information of both the operating part and the arm, it can grasp the positional relationship from the arm located in the external body space to the operating part located in the internal body space.
- since the computer knows the lengths of the arm and the operating part, and can grasp motion information such as the angle and degree of inclination between the two as they move, it can calculate the position of the second point of the surgical tool (i.e., the operating part) relative to the position of the first point.
- as the surgical tool moves, the positions of the first point and the second point change; the motion information for identifying this positional relationship may indicate the angle, degree of inclination, and the like between the first point and the second point.
- the motion information of the surgical tool can be grasped by installing an inertial measurement sensor or a motion measurement sensor on each of the arm and the operation unit.
- since the motion information of the surgical tool can then be sensed both outside and inside the body, this arrangement can be more effective in calculating the positional relationship between the arm and the operating part.
- in acquiring the characteristic information of the surgical tool, when the tool includes at least one joint connecting the first point and the second point (for example, when the tool is configured with at least one operating part, at least one arm, and a plurality of joints that drive and connect them), the computer may obtain in advance from the surgical robot, as the characteristic information, the number of joints and the driving information of the joints (for example, the degree of freedom and range of movement of each joint).
- the computer may then calculate the second point position from the first point position of the surgical tool by additionally reflecting the joint characteristic information in the virtual body model, as sketched below.
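- A minimal planar forward-kinematics sketch of this calculation (an illustrative assumption; the patent does not prescribe a specific kinematic model, and all names below are hypothetical): starting from the first point, each joint angle and link length is accumulated to locate the second point.

```python
import numpy as np

def second_point_position(first_point, link_lengths, joint_angles):
    """Illustrative planar forward kinematics: accumulate each joint's
    rotation and the following link's length, starting from the first point
    (the arm end outside the body), to locate the second point (the distal
    end of the operating part) inside the body."""
    position = np.asarray(first_point, dtype=float)
    heading = 0.0  # accumulated orientation in the working plane
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle
        position = position + length * np.array([np.cos(heading), np.sin(heading)])
    return position

# Example: two links of 20 cm and 10 cm with 30-degree and -15-degree joints.
p2 = second_point_position([0.0, 0.0], [0.20, 0.10], np.radians([30, -15]))
print(p2)
```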
- the computer may provide the position information of the surgical tool in the actual internal body space of the surgical subject based on the position of the second point of the surgical tool with respect to the virtual body model (S220).
- the computer reflects the surgical tool on the virtual body model based on the position of the second point of the surgical tool (i.e., the operating part) calculated in step S210, and can thus provide the same environment as the actual surgery through the virtual image. Accordingly, the medical staff may ascertain the actual position of the surgical tool located inside the body of the surgical subject.
- the surgical tool may not be visible in the surgical image acquired by the camera inserted into the body of the surgical subject.
- the present invention can therefore provide the precise physical position of the surgical tool through the virtual body model without relying only on the surgical image, guiding the medical staff toward more effective surgery.
- the computer can derive the positional relationship between the surgical tool (i.e., the distal end of the operating part) and an internal body object by reflecting the surgical tool on the virtual body model, and, based on this positional relationship, may determine the movement of the surgical tool or detect a collision between the tool and the internal body object to provide additional information.
- the internal body object may include a body part existing inside the body, an object introduced from the outside, an object generated by itself, and the like.
- the body part may be an object existing inside the body such as organs, blood vessels, bones, tendons, nervous tissues, and the like.
- the object introduced from the outside may be a consumable necessary for surgery such as a gauze, a clip, or the like.
- Self-created objects may include bleeding from the body part.
- FIG. 10 is a diagram schematically showing the configuration of an apparatus 300 for performing a method for calculating position information of a surgical tool according to an embodiment of the present invention.
- the processor 310 may include one or more cores (not shown), a graphics processor (not shown), and a connection passage (for example, a bus) for transmitting and receiving signals with other components.
- the processor 310 executes one or more instructions stored in the memory 320 to generate the virtual body model described with reference to FIGS. 8 to 9 and to perform the method of calculating position information of a surgical tool.
- specifically, the processor 310 executes one or more instructions stored in the memory 320 to calculate the position of the first point of the surgical tool with respect to the external body space of the surgical subject, based on the sensing information obtained from the sensing device attached to the surgical tool inserted into the internal body space; to calculate the position of the second point of the surgical tool with respect to the internal body space, by reflecting the characteristic information of the surgical tool based on the first point position through the virtual body model generated to match the physical state of the surgical subject; and to provide the position information of the surgical tool in the actual internal body space of the subject, based on the position of the second point in the virtual body model.
- the processor 310 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 310.
- the processor 310 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
- the memory 320 may store programs (one or more instructions) for processing and controlling the processor 310. Programs stored in the memory 320 may be divided into a plurality of modules according to their functions.
- the method for calculating the position information of the surgical tool according to the embodiment of the present invention described above may be implemented as a program (or application) stored in a medium to be executed in combination with a computer, which is hardware.
- image may mean multidimensional data composed of discrete image elements (eg, pixels in a 2D image and voxels in a 3D image).
- the image may include a medical image of the object obtained by the CT imaging apparatus.
- an "object” may be a person or an animal, or part or all of a person or an animal.
- the subject may include at least one of organs such as the liver, heart, uterus, brain, breast, abdomen, and blood vessels.
- a "user” may be a doctor, a nurse, a clinical pathologist, a medical imaging professional, or the like, and may be a technician who repairs a medical device, but is not limited thereto.
- medical image data is a medical image photographed by a medical imaging apparatus, and includes all medical images that can be implemented as a three-dimensional model of the body of an object.
- Medical image data may include computed tomography (CT) images, magnetic resonance imaging (MRI), positron emission tomography (PET) images, and the like.
- the term "virtual body model” refers to a model generated according to the actual patient's body based on medical image data.
- the “virtual body model” may be generated by modeling medical image data in three dimensions as it is, or may be corrected as in actual surgery after modeling.
- virtual surgery data refers to data including rehearsal or simulation actions performed on a virtual body model.
- the “virtual surgery data” may be image data for which rehearsal or simulation has been performed on the virtual body model in the virtual space, or may be data recorded for a surgical operation performed on the virtual body model.
- actual surgery data refers to data obtained by performing a surgery by an actual medical staff.
- Real surgery data may be image data photographing the surgical site in the actual surgical procedure, or may be data recorded for the surgical operation performed in the actual surgical procedure.
- coordinate system (coordinate information) of the surgical robot is coordinate information used by the surgical robot itself, and may be a coordinate system independently set in the surgical robot.
- the coordinate system (coordinate information) of a virtual body model refers to coordinate information used on the virtual body model. As the actual surgery for the surgical subject (actual patient) is started, the position of the virtual body model may be adjusted in accordance with the coordinate system of the surgical robot; the coordinate system of the virtual body model matched with the coordinate system of the surgical robot is specifically referred to as the "deformed coordinate system of the virtual body model".
- the coordinate system of the virtual body model before matching with the coordinate system of the surgical robot is referred to as "initial coordinate system of the virtual body model” or "existing coordinate system of the virtual body model”.
- a computer includes all the various devices capable of performing arithmetic processing to provide a result to a user.
- a computer may be a desktop PC or a notebook, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm PC, a Personal Digital Assistant (PDA), and the like.
- when a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 11 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
- surgical robot 34 includes imaging device 36 and surgical instrument 38.
- the robot surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robot surgery may be automatically performed by the controller 30 without the user's control.
- the server 100 is a computing device including at least one processor and a communication unit.
- the controller 30 includes a computing device including at least one processor and a communication unit.
- the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image captured by the imaging device 36 is displayed on the display 32.
- surgical robot 34 includes one or more surgical tools 38 that can perform cutting, clipping, fixing, grabbing operations, and the like, of the surgical site.
- Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
- the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
- the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
- the server 100 generates information necessary for robotic surgery using medical image data of an object previously photographed from the medical image photographing apparatus 10, and provides the generated information to the controller 30.
- the controller 30 displays the information received from the server 100 on the display 32 to provide the user, or controls the surgical robot 34 by using the information received from the server 100.
- the means that can be used as the medical imaging apparatus 10 is not limited; for example, various medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- the surgical image obtained by the imaging device 36 is transmitted to the controller 30.
- FIG. 12 is a flowchart schematically illustrating a method of generating a virtual body model according to an embodiment of the present invention.
- each step shown in FIG. 12 is performed in time series in the server 100 or the controller 30 shown in FIG. 11.
- each step is described as being performed by a computer, but the performing agent of each step is not limited to a specific apparatus, and all or part of the steps may be performed by the server 100 or the controller 30.
- the computer may acquire medical image data of a surgical subject photographed by CT, MRI, PET, etc. (S100).
- the computer may calculate the imaging posture of the surgical subject based on the surgical posture, which is determined by the location of the affected part or the type of operation. Since the surgical posture may vary depending on the surgical site and the type of operation, the imaging posture is calculated so that the medical image data can be obtained with the subject in the same posture as during the operation.
- the computer may generate 3D modeling based on the medical image data of the surgery subject to generate a virtual body model in accordance with the body of the surgery subject (S110). That is, the virtual body model may be 3D modeling data generated by performing 3D rendering of medical image data photographed by a medical imaging apparatus such as CT, MRI, or PET.
- for example, when CT image data is acquired by applying the same imaging posture to a specific patient, 3D modeling data of the patient's body can be generated by calculating the density of each point based on the Hounsfield Unit (HU) value, that is, the color (gray scale), of each CT image frame.
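- The following sketch illustrates this idea in outline only (an assumption, not the patented pipeline): the CT volume's HU values are treated as a density field and a surface mesh is extracted at a chosen HU level with the marching-cubes implementation of scikit-image.

```python
import numpy as np
from skimage import measure  # scikit-image

def model_from_ct(hu_volume, hu_threshold=300, voxel_spacing=(1.0, 1.0, 1.0)):
    """Illustrative sketch: extract an iso-surface mesh from a CT volume of
    Hounsfield Unit values at an assumed HU threshold (e.g. dense tissue).

    hu_volume:     3D numpy array of HU values (one slice per CT frame).
    voxel_spacing: physical slice/pixel spacing so the mesh has real units.
    """
    verts, faces, normals, values = measure.marching_cubes(
        hu_volume, level=hu_threshold, spacing=voxel_spacing
    )
    return verts, faces

# Example with a synthetic volume containing one dense sphere.
zz, yy, xx = np.mgrid[:64, :64, :64]
volume = np.where(
    (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2, 1000, 0
).astype(float)
verts, faces = model_from_ct(volume, hu_threshold=300)
print(verts.shape, faces.shape)
```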
- the computer may generate 3D modeling data by applying a correction algorithm that changes from image data photographed under general photographing conditions to a body state during a specific surgery.
- for example, the computer may generate 3D modeling data by applying, to medical image data of a patient photographed lying in a general state, the body changes that form the relief model, such as changes due to gravity in a tilted posture or the injection of carbon dioxide.
- the correction algorithm may be obtained by learning from training data formed by matching medical image data taken under general photographing conditions for various physical conditions with body state data captured during surgery under specific physical and surgical conditions.
- the body state data during surgery may include image data photographing the shape or arrangement of the internal organs of the body during actual surgery, or image data photographing the body surface.
- when the relief model is formed during laparoscopic or robotic surgery, the computer may construct training data by matching the patient's medical image data with body surface data captured during the surgery, and perform learning on this data.
- Body surface data may be taken by a medical staff prior to surgery.
- the medical team may acquire body surface data by photographing the modified abdominal surface by injecting carbon dioxide before performing laparoscopy or robotic surgery.
- the computer may generate a correction algorithm (eg, a relief algorithm) for forming 3D modeling data of a general state into a relief model during laparoscopy or robot surgery as the learning data is learned. Thereafter, when the medical image data of the new patient is acquired, the computer may generate a relief model by modifying the 3D modeling data generated based on the medical image data through a correction algorithm.
- medical personnel can perform simulations or rehearsals that provide virtual surgery through a virtual body model.
- the 3D modeling image through the virtual body model can be provided along with the actual surgical image during the actual surgery. Accordingly, more accurate surgical information can be provided by displaying information on body parts that are not displayed or difficult to identify in the actual surgical image on the virtual body model.
- the camera photographs the first surgical site existing in its field of view and the surgical tool located at that site. Then, when the operation needs to proceed from the first surgical site to the second surgical site, the camera is first moved by the surgical robot to the target point at the second surgical site, and then the surgical tool is moved. At this time, only the second surgical site is photographed in the field of view of the moved camera, and the surgical tool is not captured. Therefore, since neither the location of the surgical tool nor the changes inside the body caused by its movement can be grasped through the surgical image, the medical staff in charge of the operation cannot receive accurate surgical information.
- the present invention proposes a method for providing accurate information of the position of the surgical tool and the internal object of the patient by applying a virtual body model during minimally invasive surgery such as laparoscopic surgery or robot surgery.
- using the two matched coordinate systems, the position of the surgical instruments, which changes as the operation proceeds, can be identified and used to determine their relationship with objects inside the body.
- FIG. 13 is a flowchart illustrating a method of providing location information of a surgical tool according to an embodiment of the present invention.
- each step shown in FIG. 13 is performed in time series in the server 100 or the controller 30 shown in FIG. 11.
- each step is described as being performed by a computer, but the performing agent of each step is not limited to a specific apparatus, and all or part of the steps may be performed by the server 100 or the controller 30.
- the method may include: acquiring coordinate information of a surgical robot including a surgical tool based on a reference point of the surgical subject (S200); matching the coordinate information of the virtual body model, generated to match the physical state of the surgical subject, with the coordinate information of the surgical robot (S210); and calculating the position of the surgical tool in the virtual body model corresponding to the position of the surgical tool obtained from the coordinate information of the surgical robot (S220).
- the computer may acquire coordinate information (coordinate system) of the surgical robot including the surgical tool based on the reference point of the surgical subject (S200).
- the computer may specify a reference point on the surface of the body of the operator and acquire coordinate information of the surgical robot set based on the specified reference point.
- the reference point refers to a specific point on the body surface of the surgical subject; for example, it may be defined as a centrally located or otherwise representative point on the body.
- the computer may specify a reference point from the body surface of the subject using a marker that responds to a particular light source.
- the surgical robot detects a point corresponding to the marker or the identification mark displayed on the body surface, and sets the coordinate space (coordinate system) of the surgical robot based on this.
- the computer may acquire coordinate space information of the surgical robot set based on the reference point from the surgical robot.
- the coordinate space (coordinate system) of the surgical robot refers to a coordinate system in which, as the actual operation of the surgical subject (actual patient) starts, the initial coordinate space of the surgical robot is set with the reference point of the surgical subject as its center (for example, the origin); that is, it is the coordinate space in which the reference point of the surgical subject and the initial coordinate space of the surgical robot are matched.
- the computer may display an identification mark (ie, a reference point) on the body surface of the surgical subject using a 3D projection method.
- the computer projects multiple grid patterns during surgery through a projector mounted at the top of the operating table or on the surgical robot, and photographs, through a camera mounted on the operating table or the surgical robot, the marks on the surface of the patient's body (i.e., the grid pattern deformed by the shape of the body surface).
- the computer recognizes the shape of the body surface of the surgery subject in 3D, determines a specific point as a reference point based on a grid pattern displayed on the body surface, and sets a coordinate space (coordinate system) of the surgical robot based on the reference point.
- the computer may then acquire from the surgical robot the coordinate space information of the surgical robot set based on this reference point.
- the computer may obtain the coordinate information (coordinate system) of the surgical robot including a surgical tool directly from the surgical robot. That is, since the surgical robot has its own coordinate information, the computer may acquire the coordinate information of the surgical robot including the surgical tool directly from the surgical robot.
- the computer may acquire initial coordinate information (coordinate system) of the virtual body model generated in accordance with the physical state of the surgical subject and match it with the coordinate information (coordinate system) of the surgical robot (S210).
- the virtual body model is 3D modeling data generated based on the medical image data of the surgical subject; it is implemented to match the physical state of the surgical subject during the actual surgery and may be provided to the medical staff along with the actual surgical images through techniques such as AR (Augmented Reality) or MR (Mixed Reality).
- the computer may first acquire a reference point of the surgical subject in the virtual body model.
- the computer may set, as the reference point of the surgical subject in the virtual body model, the same point as the reference point used in setting the coordinate information of the surgical robot in step S200.
- the computer may set a specific point on the surface of the patient's body as a reference point in the virtual body model.
- the computer may match initial coordinate information of the virtual body model with coordinate information of the surgical robot based on a reference point in the virtual body model.
- the reference point of the virtual body model and the reference point of the surgical robot may be matched to map the coordinate space (coordinate system) of the surgical robot and the coordinate space (coordinate system) of the virtual body model to each other. Therefore, by mapping the coordinate information of the surgical robot to the initial coordinate information of the virtual body model, the computer may derive the modified coordinate information of the virtual body model centered on the reference point of the surgical subject.
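- One way to realize this matching, assuming a rigid relationship between the two coordinate spaces (an illustrative simplification; all names are hypothetical), is to assemble a homogeneous transform from the matched reference point and apply it to tool positions reported in the robot's coordinate system:

```python
import numpy as np

def make_transform(rotation, translation):
    """Assemble a 4x4 homogeneous transform (rigid relationship assumed)
    that maps robot coordinates into the virtual body model's space."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def robot_to_model(point_robot, T):
    """Apply the matched transform to a tool position reported in the
    surgical robot's own coordinate system."""
    p = np.append(np.asarray(point_robot, dtype=float), 1.0)
    return (T @ p)[:3]

# Example: the model frame is the robot frame shifted so that the matched
# reference point on the body surface becomes the origin (identity rotation
# assumed for illustration).
reference_in_robot = np.array([0.10, -0.05, 0.30])
T = make_transform(np.eye(3), -reference_in_robot)
tool_in_model = robot_to_model([0.15, -0.05, 0.25], T)
print(tool_in_model)  # -> [ 0.05  0.   -0.05]
```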
- the computer may calculate the position of the surgical tool in the virtual body model corresponding to the position of the surgical tool obtained from the coordinate information of the surgical robot based on the mutual coordinate information matched in step S210 (S220).
- the coordinate information of the surgical robot means coordinate information set by matching the initial coordinate space of the surgical robot with the reference point of the surgical subject as described above.
- the computer may acquire, based on the coordinate information of the surgical robot, the position of the surgical tool located inside the body of the subject during the actual surgery, and may calculate it by converting it into coordinate information in the virtual body model, which is mapped to the coordinate information of the surgical robot. Thereafter, the computer may provide a virtual image identical to the actual surgical environment by reflecting the surgical tool on the virtual body model at the calculated position. Therefore, the medical staff in charge of the surgery can determine, from the virtual body model, the actual position of the surgical instruments located inside the patient's body.
- the computer may derive a positional relationship between the internal object of the surgical subject and the surgical tool in the virtual body model based on the position of the surgical tool in the virtual body model.
- Various information at the time of surgery may be provided based on the positional relationship.
- the object inside the body of the surgical subject may include a body part existing inside the body, an object introduced from the outside, an object generated by itself, and the like.
- the body part may be an object existing inside the body such as organs, blood vessels, bones, tendons, nervous tissues, and the like.
- the object introduced from the outside may be a consumable necessary for surgery such as a gauze, a clip, or the like.
- Self-created objects may include bleeding from the body part.
- the computer may determine, based on the position of the surgical tool in the virtual body model, whether an internal object of the surgical subject and the surgical tool collide, and may provide the determination result. That is, since the virtual body model is generated to include information about the position, size, and shape of each body part of the surgical subject, once the position of the surgical tool, which does not stay in a fixed position during the actual surgery, is known, the virtual body model can be used to calculate the positional relationship between objects inside the body and the surgical instruments. Through this, the computer can determine in real time during surgery whether the surgical tool is in close proximity to, or in danger of contacting, an object inside the body, and whether a collision would occur if the tool were moved in a specific direction, and can provide guidance on such risk situations (see the sketch below).
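- A toy sketch of such a proximity and collision check (the 10 mm warning threshold and all names are assumptions, not values from the patent):

```python
import numpy as np

def proximity_alert(tool_tip, organ_points, warn_distance=0.01):
    """Illustrative check: compare the tool-tip position (in virtual-body-
    model coordinates) against sampled surface points of an internal object
    and flag a risk when the minimum distance drops below an assumed
    warning threshold (here 10 mm)."""
    organ_points = np.asarray(organ_points, dtype=float)
    distances = np.linalg.norm(organ_points - np.asarray(tool_tip), axis=1)
    nearest = distances.min()
    return nearest < warn_distance, nearest

# Example with a few sampled surface points of a hypothetical organ model.
organ = [[0.02, 0.00, 0.00], [0.03, 0.01, 0.00], [0.05, 0.02, 0.01]]
risk, dist = proximity_alert([0.025, 0.002, 0.0], organ)
print(risk, round(dist, 4))
```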
- when the state of an internal object of the patient is changed during the operation by the surgical robot, the computer reflects the changed state of the internal object in the virtual body model and, based on the position of the surgical tool in the virtual body model, determines whether a collision with the object inside the body occurs. For example, during operation by a surgical robot, when a surgical instrument (such as tongs) is used to lift an organ or cut blood vessels, the position, size, or shape of the organ may change. In this case, since the information on the surgical instruments and the internal objects (e.g., organs) changes, a dangerous situation such as a collision or a sudden event may occur.
- the computer may reflect the operation process during the actual surgery through the simulation process or the simulation result of the virtual body model.
- the computer can determine the position of the surgical instrument and the location of the changed internal body object to provide accurate information on whether the situation is dangerous.
- the computer calculates a position change according to the movement of the surgical tool from the coordinate information of the virtual body model, and reflects the calculated position change in the virtual body model.
- the computer may obtain information about the internal body objects present on the movement path by using the virtual body model when the surgical tool is moved from the first surgical site to the second surgical site. That is, the computer may calculate the position change along the movement path of the surgical tool and, based on it, calculate the positional relationship with internal objects in the virtual body model. This positional relationship may be used to determine whether a collision occurs due to the movement of the surgical tool, and the determination result may be provided, as in the sketch below.
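- The movement-path check could be sketched as follows (a straight-line path and the thresholds are illustrative assumptions; all names are hypothetical):

```python
import numpy as np

def path_collision_points(start, end, organ_points, clearance=0.01, steps=50):
    """Sample the assumed straight-line path of the tool from the first to
    the second surgical site in the virtual body model, and report the
    sampled positions that come closer than the clearance threshold to any
    internal-object point."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    organ_points = np.asarray(organ_points, float)
    hits = []
    for t in np.linspace(0.0, 1.0, steps):
        p = (1 - t) * start + t * end
        if np.linalg.norm(organ_points - p, axis=1).min() < clearance:
            hits.append(p)
    return hits

# Example: a path that grazes a hypothetical organ sample point.
hits = path_collision_points([0, 0, 0], [0.1, 0, 0], [[0.05, 0.005, 0.0]])
print(len(hits))
```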
- the virtual body model may provide a result of simulating the movement of the surgical tool.
- the computer may directly obtain the coordinate information of the surgical robot itself including the surgical tool from the surgical robot.
- since the surgical robot has its own coordinate information, that information can be used to directly determine the location of the surgical tool as it moves. That is, the computer directly obtains the position of the surgical tool from the surgical robot and, based on this, applies it in the virtual body model to determine whether a collision with an object inside the body occurs along the movement path of the surgical tool.
- the position of the surgical tool that is not visible through the surgical image can be accurately identified, and it can be expressed in the virtual body model. Therefore, based on this, the part which is not visible through the surgical image can be displayed by reflecting on the actual surgical image by using the virtual body model.
- the computer can generate data (e.g., virtual image data) containing the same information as the actual surgery by reflecting the surgical tool on the virtual body model, based on the position of the surgical tool obtained through the virtual body model.
- the computer may adjust the field of view of the camera on the virtual body model to acquire virtual image data including the surgical tool that is not visible through the actual surgical image, and display this data on the actual surgical image.
- even when the position of the surgical tool cannot be accurately calculated through the surgical image, collisions between surgical tools, or between a surgical tool and an organ, can be prevented based on the calculated position of the surgical tool.
- for example, the surgical tool may be removed from the inside of the body and inserted again. In this case, since the camera is zoomed in and the surgical tool moves slowly, it can take considerable time for the surgical tool to enter the camera's field of view.
- the position of a surgical tool that is not in the camera's field of view may be displayed in mini-map form. Therefore, even if the surgical tool is not in the camera's field of view, the medical staff can determine its location through the mini-map, and can detect possible collisions between surgical tools, or between a surgical tool and an organ, to prevent a dangerous situation.
- the mini-map may be displayed on a screen used by the assistant staff.
- the assistant staff does not perform the actual surgery (e.g., direct treatment using the surgical robot), but refers to the personnel in charge of assisting surgical actions around the patient.
- a haptic stimulus may also be provided to a device, such as a controller that the medical staff is operating, to inform the medical staff of a dangerous situation.
- the medical staff may wear a device having a haptic function provided separately from an operation device such as a controller, so that information about a dangerous situation may be provided during an actual operation.
- FIG. 14 is a view schematically showing the configuration of an apparatus 300 for performing a method for providing position information of a surgical tool according to an embodiment of the present invention.
- the processor 310 may include one or more cores (not shown), a graphics processor (not shown), and a connection passage (for example, a bus) for transmitting and receiving signals with other components.
- the processor 310 executes one or more instructions stored in the memory 320 to generate the virtual body model described with reference to FIGS. 12 to 13 and to perform the method of providing position information of a surgical tool.
- specifically, the processor 310 executes one or more instructions stored in the memory 320 to obtain coordinate information of the surgical robot including the surgical tool based on the reference point of the surgical subject, to match the coordinate information of the virtual body model generated according to the physical state of the surgical subject with the coordinate information of the surgical robot, and to calculate the position of the surgical tool in the virtual body model corresponding to the position of the surgical tool obtained from the coordinate information of the surgical robot.
- the processor 310 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 310.
- the processor 310 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
- the memory 320 may store programs (one or more instructions) for processing and controlling the processor 310. Programs stored in the memory 320 may be divided into a plurality of modules according to their functions.
- the method for providing position information of the surgical tool according to the embodiment of the present invention described above may be implemented as a program (or application) stored in a medium to be executed in combination with a computer, which is hardware.
- the present invention also proposes a method of utilizing the surgical information (that is, the surgical image) that can be obtained in the surgical process as described above.
- hereinafter, as a method proposed by the present invention, a method of providing the actual position information of a camera in a surgery currently performed on a specific patient, based on a surgical image, will be described.
- a "computer” is described as performing the embodiments disclosed herein.
- “Computer” may be used in the sense of encompassing devices capable of performing computing processing.
- a "computer" includes a variety of devices that can perform computational processing and provide results to users: desktop PCs and notebooks, as well as smartphones, tablet PCs, cellular phones, PCS (Personal Communication Service) phones, synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminals, Palm PCs, Personal Digital Assistants (PDAs), and the like.
- the HMD device may be a computer.
- the computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 15 is a flowchart illustrating a method for providing a surgical-image-based camera position according to an embodiment of the present invention.
- the method for providing a surgical-image-based camera position performed by a computer may include: obtaining a surgical image taken as the camera enters the body and moves along the surgical path (S100); deriving camera position information on the standard body model based on the surgical image (S110); and providing camera position information on the virtual body model of the current surgical subject based on the camera position information on the standard body model (S120). Hereinafter, each step will be described in detail.
- the computer may acquire a surgical image taken as the camera enters the body and moves the surgical path (S100).
- the computer may acquire the actual surgery image according to the actual surgery.
- the computer may acquire an actual surgical image photographed when a minimally invasive surgery such as robotic surgery or laparoscopic surgery is performed, or the image may be data obtained as the actual medical staff performs the surgery.
- the surgical image may be a surgical image obtained by performing surgery on at least one patient.
- the same type of surgery is performed in a similar surgical process even if the doctor or the patient is different.
- the surgical images obtained for any number of patients include similar surgical movements or surgical routes.
- the computer may obtain a surgical image from at least one patient, and use it as learning data for learning.
- the computer can be trained using the training data in the standard body model to be described later. A detailed process thereof will be described later.
- although the patients shown in FIGS. 16A, 16B, and 16C differ from each other, the surgical paths 200, 210, and 220 along which surgery is performed are similar to each other. Therefore, when surgery is performed along such a surgical path, a surgical image including a similar surgical path can be obtained for each patient shown in (a), (b), and (c).
- the computer may derive camera position information on the standard body model based on the surgical image (S110).
- the standard body model may be a three-dimensional body model generated by standardizing the anatomical features of the body.
- it may be a body model constructed by three-dimensionally modeling each body part (e.g., liver, heart, uterus, brain, breast, abdomen, blood vessels, etc.) after standardizing anatomical features such as shape, size, and location for each part.
- the standard body model may be a body model generated by standardizing the anatomical characteristics of the body according to various conditions such as gender, age, and physical appearance, and at least one standard body model may be implemented according to these various conditions.
- the computer may designate a surgical image corresponding to at least one reference point on the surgical path from the surgical image.
- the reference point may be at least one point that satisfies a predetermined condition among points where the camera enters the body and moves along the surgical path.
- a reference point may be defined as a point where the camera does not move for a predetermined time on the surgical path.
- a point at which the camera may be positioned on the surgical path may be determined as a reference point based on a predetermined time interval.
- the computer may map to a corresponding point on the standard body model using a surgical image corresponding to at least one reference point.
- the computer may calculate coordinate information of a corresponding point on the standard body model and derive it as camera position information on the standard body model.
- the camera position information may refer to coordinate information on a three-dimensional space in the standard body model to be viewed at the same viewpoint as the surgical image corresponding to at least one reference point.
- the computer may acquire a surgical image 300, and detect the surgical images 310 and 320 corresponding to at least one reference point in the acquired surgical image 300.
- a point at which the movement of the camera does not occur on the surgical path inside the patient's body may be designated as a reference point.
- the computer may recognize, as a reference point, a surgical image at the time when the movement of the camera does not occur, that is, a point corresponding to the surgical images in which the same background image appears continuously among the surgical images 300.
- among the surgical images in which no camera movement occurs, the computer may detect from the entire surgical image 300 the surgical images 310 and 320 of the last time point, that is, the images corresponding to the moment just before the camera moves again. Thereafter, the computer may map the points corresponding to the detected surgical images 310 and 320 onto the surgical path image 330 of the standard body model, and calculate the position information of the mapped points as coordinate information of the standard body model. For example, when the standard body model is three-dimensionally modeled data, the position may be calculated as coordinate information in a three-dimensional space. This mapping process may also be implemented through learning.
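- A rough sketch of this reference-point detection (the frame-difference threshold, run length, and names are assumed values, not from the patent): frames whose difference from the previous frame stays small indicate that the camera is not moving, and the last frame of each sufficiently long static run is taken as a reference point.

```python
import numpy as np

def static_segments(frames, diff_threshold=2.0, min_length=30):
    """Return indices of the last frame of each static run, i.e. the
    frames just before camera movement resumes (assumed criterion).

    frames: sequence of grayscale images as 2D numpy arrays.
    """
    reference_frames = []
    run_start = None
    for i in range(1, len(frames)):
        moving = np.abs(frames[i].astype(float) - frames[i - 1]).mean() > diff_threshold
        if not moving and run_start is None:
            run_start = i
        elif moving and run_start is not None:
            if i - run_start >= min_length:
                reference_frames.append(i - 1)  # last frame before motion
            run_start = None
    if run_start is not None and len(frames) - run_start >= min_length:
        reference_frames.append(len(frames) - 1)
    return reference_frames

# Example: 100 identical frames followed by 20 noisy (moving) frames.
still = [np.zeros((8, 8)) for _ in range(100)]
moving = [np.random.rand(8, 8) * 255 for _ in range(20)]
print(static_segments(still + moving))  # -> [99]
```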
- the learning method may be a machine learning method such as supervised learning, unsupervised learning, or reinforcement learning; for example, a deep-learning-based convolutional neural network (CNN) may be used.
- the computer may designate a surgical image corresponding to at least one reference point on the surgical path from the surgical image.
- the computer may acquire a virtual body model of the patient corresponding to the surgical image, and map the surgical image corresponding to the at least one reference point on the virtual body model of the patient. This may be similar to the process of FIG. 17 described above.
- the computer may then derive the camera position information on the standard body model by converting the point mapped on the virtual body model of the patient.
- the computer may perform a process of converting the coordinate information of the virtual body model to the coordinate information of the standard body model, for example, to match the coordinate information between each other using a transformation matrix.
- FIG. 18 is a diagram for explaining an example of a process of converting coordinate information of a virtual body model of a patient into coordinate information of a standard body model according to an embodiment of the present invention.
- coordinate information of the standard body model can be derived by converting the virtual body model of each patient shown in FIGS. 18A, 18B, and 18C using the transformation matrices T_A, T_B, and T_C. Through this conversion, the computer can finally calculate the coordinate information transformed into the standard body model space from the surgical image obtained for each patient shown in (a), (b), and (c) of FIG. 18.
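- Applying such a per-patient transformation matrix could look like the following sketch (a 4x4 affine matrix is assumed as the representation; the numbers and names are illustrative only):

```python
import numpy as np

def to_standard_space(point_patient, T_patient):
    """Illustrative sketch of the conversion in FIG. 18: map a point from
    one patient's virtual-body-model coordinates into the standard body
    model space using that patient's matrix (T_A, T_B, T_C, ...)."""
    p = np.append(np.asarray(point_patient, dtype=float), 1.0)
    return (np.asarray(T_patient, dtype=float) @ p)[:3]

# Example: patient A's model is assumed scaled 1.1x and shifted relative
# to the standard model (illustrative numbers only).
T_A = np.diag([1.1, 1.1, 1.1, 1.0])
T_A[:3, 3] = [0.01, -0.02, 0.0]
print(to_standard_space([0.10, 0.20, 0.05], T_A))
```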
- the computer may derive the positional information on the standard body model through learning by configuring the surgical image as the training data. This learning process will be described with reference to FIGS. 20 and 21.
- the computer may provide the camera position information on the virtual body model of the current surgery subject based on the camera position information on the standard body model (S120).
- the current surgical subject may be different from the patient corresponding to the surgical image obtained in the above-described step S100, and may mean a patient currently performing the actual surgery.
- the computer may convert the coordinate information on the standard body model to the coordinate information on the virtual body model of the current surgical subject.
- the computer may calculate the position corresponding to the camera position information on the standard body model from the converted coordinate information on the virtual body model of the current surgical subject. That is, at corresponding camera positions, the standard body model and the virtual body model of the current surgical subject share the same camera viewpoint; therefore, the camera viewpoint of the surgical image acquired in step S100 and the camera viewpoint on the virtual body model of the current surgical subject may finally be mapped identically.
- the computer may grasp the location information of the current camera on the virtual body model of the current surgical subject through the mapping with the standard body model based on the surgical image acquired in step S100.
- the transformation may be performed for the whole body or for each organ in the body.
- a linear transformation may be applied or a nonlinear transformation may be applied.
- the computer may extract the reference position on the standard body model, and extract the corresponding reference position on the virtual body model corresponding to the extracted reference position.
- the coordinate information on the standard body model can be converted into the coordinate information on the virtual body model.
- the computer can then derive the camera position information from the transformed coordinate information on the virtual body model.
- the reference position may be determined by extracting a specific organ in the body as a reference position, or by extracting each organ in the body to determine the reference position of each organ (for example, the center point of the organ) as a reference position.
- the transformation when the transformation is performed on the entire body, the transformation may be performed by extracting a specific organ as a reference and applying a transformation matrix based on the corresponding position.
- the transformation when the transformation is performed for each organ, the transformation may be performed by extracting a reference position for each organ and applying a transformation matrix based on each extracted reference position.
- FIG. 19 is a diagram illustrating a process of converting coordinate information of a standard body model into coordinate information of a virtual body model for a current surgery subject, according to an exemplary embodiment.
- the computer may apply segmentation to a standard body model to extract each organ and calculate a reference position for each organ.
- the computer may apply segmentation to the virtual body model of the current surgical subject, extract each organ, and calculate a reference position for each organ.
- the computer may apply the transformation matrix T by matching the reference positions of the corresponding organs between the standard body model and the virtual body model.
- the transformation may be linear or nonlinear according to the embodiment.
- the computer can transform the coordinate information between the standard body model and the virtual body model by applying the transformation matrix. Therefore, the computer can finally derive the camera position information converted into coordinate information on the virtual body model.
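- A sketch of estimating such a matrix T from matched organ reference positions (a least-squares linear fit is an assumed simplification; a nonlinear warp could equally be used, and all names are illustrative):

```python
import numpy as np

def fit_linear_transform(standard_centroids, patient_centroids):
    """Estimate a 4x3 affine mapping from standard-body-model coordinates
    to the current subject's virtual-body-model coordinates, given matched
    organ reference positions (e.g. centroids from segmentation)."""
    src = np.hstack([np.asarray(standard_centroids, float),
                     np.ones((len(standard_centroids), 1))])  # homogeneous
    dst = np.asarray(patient_centroids, float)
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)  # solve src @ M ~= dst
    return M

# Example with four matched organ centroids (illustrative coordinates).
standard = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
patient = [[0.1, 0, 0], [1.2, 0, 0], [0.1, 1.1, 0], [0.1, 0, 0.9]]
M = fit_linear_transform(standard, patient)
print(np.array([1.0, 1.0, 1.0, 1.0]) @ M)  # map the point (1, 1, 1)
```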
- FIGS. 20 and 21 are diagrams for explaining a process of deriving position information on a standard body model from a surgical image through learning according to an embodiment of the present invention.
- the computer may perform deep-learning-based training in a CNN manner, taking the surgical image 400 as the input and the camera position information (x, y, z, rx, ry, rz) in the standard body model space as the output.
- the computer may acquire a surgical image 400 for a new patient as an input value.
- the surgical image 400 may be an image corresponding to a specific position inside the body of the new patient.
- the computer may estimate the position information (x, y, z, rx, ry, rz) on the standard body model for the input surgical image 400 through the CNN learning model 410.
- the surgical image of the existing patient and the position information on the standard body model corresponding to the surgical image may be used as the training data set.
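- A minimal sketch of such a pose-regression network in PyTorch (the architecture, input size, and training details below are assumptions; the patent does not specify them):

```python
import torch
import torch.nn as nn

class CameraPoseCNN(nn.Module):
    """Illustrative stand-in for the CNN 410: regress the six camera pose
    values (x, y, z, rx, ry, rz) in standard-body-model space from a
    surgical image. The layer sizes are assumed, not from the patent."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)  # x, y, z, rx, ry, rz

    def forward(self, image):
        return self.head(self.features(image).flatten(1))

# Training-step sketch: pairs of (surgical image, pose on standard model).
model = CameraPoseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
image = torch.randn(8, 3, 224, 224)   # placeholder batch of frames
target_pose = torch.randn(8, 6)       # placeholder ground-truth poses
loss = nn.functional.mse_loss(model(image), target_pose)
loss.backward()
optimizer.step()
```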
- the computer may likewise perform deep-learning-based training in a CNN manner, taking the surgical image 400 as the input and the camera position information (x, y, z, rx, ry, rz) in the patient's virtual body model space as the output. In this case, the computer applies the transformation matrix between the patient's virtual body model and the standard body model, so that the camera position information (x, y, z, rx, ry, rz) in the patient's virtual body model space can be converted into position information in the standard body model space. Here, an existing patient's surgical image, the position information on that patient's virtual body model, and the transformation matrix between the standard body model and the virtual body model can be used as a training data set; the positions on the virtual body model, converted to positions on the standard body model through the transformation matrix, can then be constructed as the final training data set.
- a computer may receive a surgical image 400 through a CNN 410 and output coordinate information 420 on a standard body model space.
- the computer may convert the coordinate information 420 on the output standard body model space into coordinate information 430 about the virtual body model of the patient using the transformation matrix T. That is, the computer may perform a training and test process to determine whether the value to be finally predicted from the training data is correctly inferred.
- the virtual body model in the present invention may be 3D modeling data generated based on medical image data previously photographed inside the body of the patient.
- the model may be modeled in accordance with the body of the surgical subject, and may be corrected to the same state as the actual surgical state.
- the computer may calculate the camera position in the actual body of the current surgical subject based on the camera position information on the virtual body model derived in step S120 at the time of surgery. For example, when the computer receives a current surgical image photographing the current surgical scene during the actual surgery, it performs the above-described steps S100 to S120 to calculate the camera position information on the virtual body model corresponding to the viewpoint of the received image. Since the virtual body model is modeled according to the body of the current surgical subject and implemented in the same state as during the actual operation, once the computer recognizes the position of the current camera on the virtual body model, it can calculate information on the camera's location in the actual body.
- the computer may guide the movement path of the camera on the surgical path of the current surgical subject based on the camera position information on the virtual body model derived in step S120. For example, the computer may recognize that the surgery of the first area is terminated on the surgical path of the current surgical subject. In addition, the computer may recognize that the surgery is performed to the second region, which is the next surgical region of the first region, and thus guide the position information of the second region. For example, when the current surgical image is output through the screen, information indicating the moving path of the camera (for example, the moving time and the moving direction of the camera) may be output together on the screen of the current surgical image.
- the computer may recognize the end of the operation in the corresponding region (point) by image recognition, or by using auxiliary information other than the surgical image, such as cue sheet data.
- the cue sheet data includes data in which the actual surgery process is arranged in order according to time based on the minimum surgical operation unit, and may include surgical information corresponding to the minimum surgical operation unit. Therefore, the computer can recognize the operation operation information of the corresponding surgery area from the cue sheet data and recognize the end of the operation operation based on this.
- since the cue sheet data is composed of a hierarchical structure classified from upper surgical operations to lower surgical operations, even if the same surgical operation appears several times, the cue sheet data can be used to determine where the operation belongs in the hierarchy, so the end of the surgical operation can be recognized relatively accurately.
- the computer may display the movement path of the camera on the virtual body model. For example, a movement path that is gradually moved from the first surgical region to the second surgical region on the surgical path of the current surgical subject may be displayed on the virtual body model.
- the computer can recognize the movement time of the camera by recognizing the behavior just before moving to the next surgery area (surgery step) from the surgery image.
- the computer may also recognize, from the cue sheet data, the action just before moving to the next surgical area (surgical step), so that the movement time of the camera can be grasped.
- when performing minimally invasive surgery such as laparoscopic or robotic surgery, a function related to the movement path of the camera may be added to the laparoscope or the surgical robot. For example, when the operation in the first surgical area is completed on the surgical path of the current surgical subject, the computer may provide a function of automatically setting the position of the camera to the next surgical area (surgical step).
- according to the present invention, the camera coordinate information in three-dimensional space can be provided efficiently using only the surgical image.
- in addition, since the learning model only needs to be constructed for the standard body model, without having to build and train a model for each individual patient, the process of building the learning model and performing the learning can be carried out efficiently.
- FIG. 22 is a diagram schematically illustrating a system capable of performing robot surgery according to an embodiment of the present invention.
- the robotic surgical system includes a medical imaging apparatus 10, a server 100, a control unit 30 provided in an operating room, a display 32, and a surgical robot 34.
- the medical imaging apparatus 10 may be omitted in the robot surgery system according to the disclosed embodiment.
- surgical robot 34 includes imaging device 36 and surgical instrument 38.
- the robotic surgery is performed by the user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robotic surgery may also be performed automatically by the control unit 30 without the user's control.
- the server 100 is a computing device including at least one processor and a communication unit.
- the control unit 30 is a computing device including at least one processor and a communication unit.
- the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- the imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera device and is used to photograph an object, that is, a surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled with a surgical arm of the surgical robot 34.
- the image captured by the imaging device 36 is displayed on the display 32.
- the surgical robot 34 includes one or more surgical tools 38 that can perform operations such as cutting, clipping, fixing, and grabbing at the surgical site.
- Surgical tool 38 is used in conjunction with the surgical arm of the surgical robot 34.
- the controller 30 receives information necessary for surgery from the server 100 or generates information necessary for surgery and provides the information to the user. For example, the controller 30 displays the information necessary for surgery, generated or received, on the display 32.
- the user performs the robot surgery by controlling the movement of the surgical robot 34 by manipulating the control unit 30 while looking at the display 32.
- the server 100 generates information necessary for the robotic surgery using medical image data of the object previously captured by the medical imaging apparatus 10, and provides the generated information to the control unit 30.
- the control unit 30 displays the information received from the server 100 on the display 32 to provide it to the user, or controls the surgical robot 34 using the information received from the server 100.
- the imaging means that can be used for the medical imaging apparatus 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- according to the disclosed embodiment, the surgical information (i.e., the surgical image) can be used so that the actual position information of the camera is provided during the surgery currently being performed on the specific patient.
- the embodiments described above with reference to FIGS. 15 to 21 are not applicable only in connection with the robotic surgery system illustrated in FIG. 22, but may be implemented in various forms in any kind of embodiment to which the present invention can be applied.
- FIG. 23 is a diagram schematically illustrating a configuration of an apparatus 500 for performing a surgical image-based camera position providing method according to an embodiment of the present invention.
- the processor 510 may include one or more cores (not shown) and a graphics processor (not shown), and/or a connection passage (e.g., a bus) for transmitting and receiving signals with other components.
- the processor 510 executes one or more instructions stored in the memory 520 to perform a camera position providing method based on the surgical image described with reference to FIGS. 15 to 21.
- specifically, the processor 510 executes one or more instructions stored in the memory 520 to acquire a surgical image captured as the camera enters the body and moves along the surgical path, derive camera position information on the standard body model based on the surgical image, and provide camera position information on the virtual body model of the current surgical subject based on the camera position information on the standard body model.
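- as a non-authoritative sketch, the steps above could be composed into a single routine; the feature extractor, standard-model localizer, and patient registration transform are hypothetical stand-ins for stages the embodiment leaves unspecified:

```python
import numpy as np

def provide_camera_position(surgical_image, extract_features, locate_on_standard, A, t):
    """Sketch of the routine run by the processor 510: surgical image ->
    position on the standard body model -> position on the current surgical
    subject's virtual body model. All callables and (A, t) are assumptions."""
    features = extract_features(surgical_image)         # image -> feature vector
    std_pos = np.asarray(locate_on_standard(features))  # standard-body-model coordinate
    return A @ std_pos + t                              # map onto the virtual body model
```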
- the processor 510 may further include RAM (random access memory, not shown) and ROM (read-only memory, not shown) for temporarily and/or permanently storing signals (or data) processed in the processor 510.
- the processor 510 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processor, a RAM, and a ROM.
- the memory 520 may store programs (one or more instructions) for processing and control by the processor 510. The programs stored in the memory 520 may be divided into a plurality of modules according to their functions.
- the method for providing a camera position based on a surgical image described above may be implemented as a program (or an application) and stored in a medium so as to be executed in combination with a computer, which is hardware.
- in order for the computer to read the program and execute the methods implemented as the program, the program may include code coded in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the device interface of the computer.
- such code may include functional code related to functions defining the operations necessary for executing the methods, and may include control code related to the execution procedures necessary for the computer's processor to execute those functions in a predetermined sequence.
- the code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media required for the processor to execute the functions should be referenced.
- in addition, when the computer's processor needs to communicate with a remote computer or server to execute the functions, the code may further include communication-related code specifying how to communicate with the remote computer or server using the computer's communication module, and what information or media should be transmitted and received during communication.
- the storage medium is not a medium that stores data for a short time, such as a register, a cache, or volatile memory, but a medium that stores data semi-permanently and is readable by a device.
- examples of the storage medium include, but are not limited to, ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. That is, the program may be stored in various recording media on various servers accessible to the computer, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that computer-readable code is stored in a distributed fashion.
- the program may reside in RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable ROM), EEPROM (electrically erasable programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Theoretical Computer Science (AREA)
- Robotics (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Pathology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
Applications Claiming Priority (14)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20180019868 | 2018-02-20 | ||
| KR10-2018-0019866 | 2018-02-20 | ||
| KR20180019867 | 2018-02-20 | ||
| KR20180019866 | 2018-02-20 | ||
| KR10-2018-0019867 | 2018-02-20 | ||
| KR10-2018-0019868 | 2018-02-20 | ||
| KR10-2018-0098359 | 2018-08-23 | ||
| KR1020180098360A KR102014355B1 (ko) | 2018-02-20 | 2018-08-23 | Method and apparatus for calculating position information of a surgical tool |
| KR1020180098359A KR102014359B1 (ko) | 2018-02-20 | 2018-08-23 | Method and apparatus for providing camera position based on a surgical image |
| KR10-2018-0098360 | 2018-08-23 | ||
| KR1020180130229A KR102013866B1 (ko) | 2018-02-20 | 2018-10-29 | Method and apparatus for calculating camera position using an actual surgical image |
| KR10-2018-0130229 | 2018-10-29 | ||
| KR20180139494 | 2018-11-14 | ||
| KR10-2018-0139494 | 2018-11-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019164275A1 true WO2019164275A1 (fr) | 2019-08-29 |
Family
ID=67686850
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2019/002093 Ceased WO2019164275A1 (fr) | 2018-02-20 | 2019-02-20 | Procédé et dispositif pour reconnaître la position d'un instrument chirurgical et caméra |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019164275A1 (fr) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010200894A (ja) * | 2009-03-02 | 2010-09-16 | Tadashi Ukimura | Surgery assistance system and surgical robot system |
| KR20120111871A (ko) * | 2011-03-29 | 2012-10-11 | 삼성전자주식회사 | Method and apparatus for generating images of body organs using a three-dimensional model |
| KR20140020071A (ko) * | 2012-08-07 | 2014-02-18 | 삼성전자주식회사 | Surgical robot system and control method therefor |
| KR20150043245A (ko) * | 2012-08-14 | 2015-04-22 | 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 | Systems and methods for registration of multiple vision systems |
| KR20160086629A (ko) * | 2015-01-12 | 2016-07-20 | 한국전자통신연구원 | Method and apparatus for registering positions of surgical site and surgical tool in image-guided surgery |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11957301B2 (en) * | 2011-08-21 | 2024-04-16 | Asensus Surgical Europe S.à.R.L. | Device and method for assisting laparoscopic surgery—rule based approach |
| CN111127436A (zh) * | 2019-12-25 | 2020-05-08 | 北京深测科技有限公司 | Displacement detection and early-warning method for bridges |
| CN111127436B (zh) * | 2019-12-25 | 2023-10-20 | 北京深测科技有限公司 | Displacement detection and early-warning method for bridges |
| CN114697545A (zh) * | 2020-12-29 | 2022-07-01 | 财团法人工业技术研究院 | Movable photographing system and photographing composition control method |
| CN114697545B (zh) * | 2020-12-29 | 2023-10-13 | 财团法人工业技术研究院 | Movable photographing system and photographing composition control method |
| US11928834B2 (en) | 2021-05-24 | 2024-03-12 | Stryker Corporation | Systems and methods for generating three-dimensional measurements using endoscopic video data |
| US12475586B2 (en) | 2021-05-24 | 2025-11-18 | Stryker Corporation | Systems and methods for generating three-dimensional measurements using endoscopic video data |
| CN114224512A (zh) * | 2021-12-30 | 2022-03-25 | 上海微创医疗机器人(集团)股份有限公司 | Collision detection method, apparatus, device, readable storage medium and program product |
| CN114224512B (zh) * | 2021-12-30 | 2023-09-19 | 上海微创医疗机器人(集团)股份有限公司 | Collision detection method, apparatus, device, readable storage medium and program product |
| CN114494602A (zh) * | 2022-02-10 | 2022-05-13 | 苏州微创畅行机器人有限公司 | Collision detection method, system, computer device and storage medium |
| CN116327365A (zh) * | 2023-05-22 | 2023-06-27 | 北京迈迪斯医疗技术有限公司 | Electromagnetic-positioning-based biopsy system and navigation method |
| CN116327365B (zh) * | 2023-05-22 | 2023-08-01 | 北京迈迪斯医疗技术有限公司 | Electromagnetic-positioning-based biopsy system and navigation method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019164275A1 (fr) | Method and device for recognizing the position of a surgical instrument and camera | |
| WO2015002409A1 (fr) | Method for sharing information in ultrasound imaging | |
| WO2015093724A1 (fr) | Method and apparatus for providing blood vessel analysis data using a medical image | |
| WO2020185003A1 (fr) | Ultrasound image display method, ultrasound diagnostic device, and computer program product | |
| WO2018048054A1 (fr) | Method and device for producing a virtual reality interface on the basis of single-camera 3D image analysis | |
| WO2018097641A1 (fr) | X-ray apparatus and method for acquiring medical images thereof | |
| WO2019164271A1 (fr) | Method and device for generating a virtual human body model | |
| WO2016190517A1 (fr) | Medical image display apparatus and method of providing a user interface | |
| WO2019083227A1 (fr) | Medical image processing method, and medical image processing apparatus implementing the method | |
| WO2017030276A1 (fr) | Medical image display device and medical image processing method | |
| WO2016043411A1 (fr) | X-ray imaging apparatus and scanning method thereof | |
| WO2011040769A2 (fr) | Surgical image processing device, image processing method, laparoscopic manipulation method, surgical robot system and operation-limiting method thereof | |
| WO2014200289A2 (fr) | Apparatus and method for providing medical information | |
| WO2017142183A1 (fr) | Image processing apparatus, image processing method, and recording medium recording same | |
| WO2014200265A1 (fr) | Method and apparatus for presenting medical information | |
| WO2021242050A1 (fr) | Oral image processing method, oral diagnosis device for performing an operation according thereto, and computer-readable memory medium storing a program for implementing the method | |
| WO2021157851A1 (fr) | Ultrasound diagnostic apparatus and method of operating the same | |
| WO2015076508A1 (fr) | Method and apparatus for displaying an ultrasound image | |
| WO2015102391A1 (fr) | Image generation method for analyzing a user's golf swing posture by means of depth image analysis, and method and device for analyzing a golf swing posture using same | |
| WO2019164274A1 (fr) | Method and device for generating training data | |
| EP3071113A1 | Method and apparatus for displaying an ultrasound image | |
| WO2016006765A1 (fr) | X-ray device | |
| WO2019164270A1 (fr) | Method and device for surgical optimization | |
| WO2023182727A1 (fr) | Image verification method, diagnostic system for executing same, and computer-readable recording medium having the method recorded thereon | |
| WO2019132166A1 (fr) | Method and program for displaying a surgical assistant image | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19757997 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19757997 Country of ref document: EP Kind code of ref document: A1 |
|
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 12.02.2021) |
|