WO2019164273A1 - Method and device for predicting surgery time based on a surgical image
- Publication number
- WO2019164273A1 (PCT/KR2019/002091)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgery
- image
- surgical
- time
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A: HUMAN NECESSITIES
  - A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    - A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
      - A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
        - A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
        - A61B34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
        - A61B34/30: Surgical robots
      - A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- G: PHYSICS
  - G06: COMPUTING OR CALCULATING; COUNTING
    - G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N20/00: Machine learning
    - G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
        - G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
      - G06T3/00: Geometric image transformations in the plane of the image
Definitions
- The present invention relates to a method and apparatus for predicting surgery time based on a surgical image.
- Open surgery refers to surgery in which the medical staff directly sees and touches the part to be treated.
- Minimally invasive surgery, also known as keyhole surgery, typically includes laparoscopic surgery and robotic surgery.
- In laparoscopic surgery, small holes are made where needed without opening the abdomen, and a laparoscope fitted with a special camera and the surgical tools are inserted into the body, with the procedure observed through a video monitor.
- Microsurgery is performed using a laser or special instruments.
- Robotic surgery performs minimally invasive surgery using a surgical robot.
- Radiation surgery refers to surgical treatment performed with radiation or laser light from outside the body.
- One problem to be solved by the present invention is to provide a method and apparatus for predicting surgery time based on a surgical image.
- Another problem to be solved by the present invention is to provide a method and apparatus for predicting the surgery time of each surgical step by performing learning based on surgical images.
- Another problem to be solved by the present invention is to provide a method and apparatus for estimating the remaining surgery time required to complete the current surgical step, using a surgical image obtained in real time.
- Another problem to be solved by the present invention is to provide a method and apparatus for generating a variety of learning data, to be used for predicting surgery time, based on surgical images obtained from actual patient operations.
- A method for predicting surgery time based on a surgical image, performed by a computer, includes: obtaining a preset surgical image containing the surgical operations of a specific surgical step; generating learning data using the preset surgical image and the surgery time obtained based on the preset surgical image; and performing learning based on the learning data to predict the surgery time of the specific surgical step.
- The generating of the learning data may include: obtaining a first surgery time based on the preset surgical image; generating a censored surgical image by removing an arbitrary section from the preset surgical image; obtaining a second surgery time based on the censored surgical image; and generating at least one of the preset surgical image and the censored surgical image as the learning data, based on the first surgery time and the second surgery time.
- In the performing of the learning, when the preset surgical image is generated as the learning data, the learning may be performed based on each image frame in the preset surgical image and the first surgery time.
- The method may further include: obtaining an actual surgical image by performing the specific surgical step in an actual surgical procedure; obtaining, based on the image frame at the current time point in the actual surgical image, the surgery time elapsed so far in performing the specific surgical step; and estimating the remaining surgery time required to perform the specific surgical step after the current time point, based on the average surgery time of the specific surgical step predicted through the learning and the surgery time elapsed up to the current time point.
- In the obtaining of the preset surgical image, preset surgical images in which the specific surgical step is performed may be obtained from each of a plurality of patients.
- The specific surgical step may be any one of the surgical steps belonging to a specific level of a surgical procedure organized as a hierarchical structure.
- An apparatus according to an embodiment includes a memory storing one or more instructions and a processor executing the one or more instructions stored in the memory, wherein the processor, by executing the one or more instructions, obtains a preset surgical image containing the surgical operations of a specific surgical step, generates learning data using the preset surgical image and the surgery time obtained based on the preset surgical image, and performs learning based on the learning data to predict the surgery time of the specific surgical step.
- A computer program according to an embodiment of the present invention is combined with a computer, which is hardware, and stored in a computer-readable recording medium in order to perform the method of predicting surgery time based on a surgical image.
- According to the present invention, it is possible to predict the surgery time and the remaining surgery time for each surgical step based on surgical images. By providing the estimated surgery time and remaining surgery time to the medical staff, the current progress of the operation can be understood more accurately, and the remainder of the surgery can proceed efficiently.
- Since the present invention can provide the surgery time and the remaining surgery time in real time, an anesthetic dosage can be calculated from them. It is also effective for determining an appropriate additional anesthesia dose according to the remaining surgery time.
- The present invention can generate a variety of learning data based on surgical images obtained from the actual surgical procedures of patients, through which learning for predicting surgery time can be performed more effectively.
- By learning from this variety of learning data, an accurate predicted surgery time can be derived for each surgical step.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- FIG. 2 is a flowchart schematically illustrating a method for predicting a surgery time based on a surgery image according to an exemplary embodiment of the present invention.
- FIG. 3 is a view for explaining a process of generating learning data based on a surgical image according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a process of performing learning based on learning data according to an embodiment of the present invention.
- FIG. 5 is a view schematically showing the configuration of an apparatus 200 for performing a method for predicting a surgery time based on a surgery image according to an embodiment of the present invention.
- A “unit” or “module” refers to software, or a hardware component such as an FPGA or an ASIC, and a “unit” or “module” performs certain roles. However, a “unit” or “module” is not limited to software or hardware.
- A “unit” or “module” may be configured to reside in an addressable storage medium or to execute on one or more processors.
- Thus, as an example, a “unit” or “module” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided within the components and “units” or “modules” may be combined into a smaller number of components and “units” or “modules”, or further separated into additional components and “units” or “modules”.
- In this specification, a computer includes all of the various devices capable of performing arithmetic processing and providing a result to a user.
- For example, a computer may be a desktop PC or a notebook computer, as well as a smartphone, a tablet PC, a cellular phone, a PCS (Personal Communication Service) phone, a synchronous/asynchronous IMT-2000 (International Mobile Telecommunication-2000) mobile terminal, a Palm Personal Computer (PC), a personal digital assistant (PDA), and the like.
- When a head mounted display (HMD) device includes a computing function, the HMD device may be a computer.
- Further, a computer may correspond to a server that receives a request from a client and performs information processing.
- FIG. 1 is a schematic diagram of a system capable of performing robot surgery according to an embodiment of the present invention.
- Referring to FIG. 1, the robotic surgery system includes a medical imaging apparatus 10, a server 100, and a control unit 30, a display 32, and a surgical robot 34 provided in the operating room.
- In some embodiments, the medical imaging apparatus 10 may be omitted from the robotic surgery system according to the disclosed embodiment.
- The surgical robot 34 includes an imaging device 36 and a surgical tool 38.
- The robotic surgery is performed by a user controlling the surgical robot 34 using the control unit 30. In one embodiment, the robotic surgery may also be performed automatically by the control unit 30 without the user's control.
- The server 100 is a computing device including at least one processor and a communication unit.
- The control unit 30 includes a computing device including at least one processor and a communication unit.
- In one embodiment, the control unit 30 includes hardware and software interfaces for controlling the surgical robot 34.
- The imaging device 36 includes at least one image sensor. That is, the imaging device 36 includes at least one camera and is used to photograph the object, that is, the surgical site. In one embodiment, the imaging device 36 includes at least one camera coupled to a surgical arm of the surgical robot 34.
- The image photographed by the imaging device 36 is displayed on the display 32.
- The surgical robot 34 includes one or more surgical tools 38 that can perform operations such as cutting, clipping, fixing, and grabbing at the surgical site.
- The surgical tool 38 is used coupled to a surgical arm of the surgical robot 34.
- The control unit 30 receives the information necessary for the surgery from the server 100, or generates such information itself, and provides it to the user. For example, the control unit 30 displays the generated or received information necessary for the surgery on the display 32.
- The user performs the robotic surgery by controlling the movement of the surgical robot 34 through the control unit 30 while watching the display 32.
- The server 100 generates the information necessary for the robotic surgery using medical image data of the object photographed in advance by the medical imaging apparatus 10, and provides the generated information to the control unit 30.
- The control unit 30 displays the information received from the server 100 on the display 32 to provide it to the user, or controls the surgical robot 34 using the information received from the server 100.
- The imaging means that can be used as the medical imaging apparatus 10 is not limited; for example, various other medical image acquisition means such as CT, X-ray, PET, and MRI may be used.
- The present invention performs learning using surgical images, or surgical data that can be obtained during a surgical procedure, as learning material, and provides methods that can be utilized, through such learning, in the surgical procedures of other patients.
- In the following description, a computer performs the method of predicting surgery time based on a surgical image according to the embodiments disclosed herein.
- The computer may be the server 100 or the control unit 30 of FIG. 1, but is not limited thereto, and the term is used here to encompass any device capable of performing computing.
- For example, the computer may be a computing device provided separately from the devices shown in FIG. 1.
- Also, the embodiments disclosed below are not only applicable in connection with the robotic surgery system illustrated in FIG. 1, but may be applied to all kinds of embodiments in which a surgical image can be acquired and utilized during a surgical procedure.
- For example, besides robotic surgery, they can be applied in connection with minimally invasive surgery such as laparoscopic surgery or endoscopic surgery.
- FIG. 2 is a flowchart schematically illustrating a method for predicting a surgery time based on a surgery image according to an exemplary embodiment of the present invention.
- Referring to FIG. 2, the method for predicting surgery time based on a surgical image includes: obtaining a preset surgical image containing the surgical operations of a specific surgical step (S100); generating learning data using the preset surgical image and the surgery time obtained based on the preset surgical image (S200); and performing learning based on the learning data to predict the surgery time of the specific surgical step (S300). Each step is described in detail below.
- First, the computer may obtain a preset surgical image containing the surgical operations of a specific surgical step (S100).
- The medical staff may perform the actual surgery on the patient directly, or may perform minimally invasive surgery using a laparoscope or an endoscope, including the surgical robot described with reference to FIG. 1.
- In this case, the computer may acquire a surgical image photographing a scene that includes the surgical operations performed during the surgical procedure, the surgical tools involved, the surgical site, and the like.
- For example, the computer may acquire, from a camera that has entered the patient's body, a surgical image photographing a scene that includes the surgical site currently being operated on and the surgical tools.
- Each type of surgery (e.g., gastric cancer surgery, colorectal cancer surgery, etc.) is performed through a different surgical procedure, but surgeries of the same type are performed through the same or a similar series of surgical procedures. That is, when a specific surgery (for example, the same type of surgery, such as gastric cancer surgery) is performed on a plurality of patients, the same or similar surgical operations are performed on each patient within a specific surgical procedure.
- In other words, a specific surgery may consist of surgical procedures (i.e., surgical steps) predefined according to classification criteria.
- For example, the computer may classify the surgical steps according to the time course of a specific surgery, or classify them so that each surgical step corresponds to one surgical site of the specific surgery.
- Alternatively, the computer may classify the surgical steps based on the position or moving range of the camera during the specific surgery, or based on changes (e.g., replacement) of the surgical tools during the specific surgery.
- The computer may classify the surgical steps according to such a specific classification criterion and then predefine the surgical operations performed in each classified surgical step.
- In one embodiment, the surgical steps constituting a specific surgery may be classified into a hierarchical structure.
- In other words, a specific surgical procedure may be classified step by step from the lowest level to the highest level to form a hierarchical structure.
- Here, the lowest level consists of the smallest units representing the surgical procedure, and may include minimal surgical operations, each having meaning as one minimal operation.
- For example, the computer may recognize a surgical motion representing one constant motion pattern, such as cutting, grabbing, or moving, as a minimal surgical operation, and configure it as the lowest level (i.e., the lowest surgical step).
- The computer may then configure a higher level by grouping at least one minimal surgical operation according to a specific classification criterion (e.g., time course, surgical site, camera movement, change of surgical tool, etc.). For example, when minimal surgical operations such as grabbing, moving, and cutting with a surgical tool are performed in a predetermined order, and that connected sequence has meaning as one specific operation, they can be grouped into one higher level. As another example, when minimal surgical operations such as clipping, moving, and cutting are performed in succession, the computer may recognize this as a surgical operation for cutting a blood vessel and group them into one higher level.
- As yet another example, when minimal surgical operations such as grabbing, lifting, and cutting are performed in succession, the computer may recognize this as a surgical operation for removing fat and group them into one higher level.
- By repeatedly grouping lower surgical operations (i.e., lower surgical steps) into higher surgical operations (i.e., higher surgical steps) in this way, a hierarchical structure can finally be formed up to the highest level (i.e., the highest surgical step) of the specific surgical procedure.
- In other words, a specific surgery may be represented as a hierarchical structure, such as a tree, of the surgical steps constituting it. A minimal sketch of such a tree follows below.
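- As an illustration of this hierarchy, the following sketch models surgical steps as a tree; the step names are hypothetical examples, not terminology from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SurgicalStep:
    """A node in the hierarchical structure of a surgical procedure."""
    name: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Yield the minimal surgical operations (lowest-level steps)."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

# Hypothetical example: a vessel-cutting step grouped from minimal operations.
cut_vessel = SurgicalStep("cut_blood_vessel", [
    SurgicalStep("clipping"),
    SurgicalStep("moving"),
    SurgicalStep("cutting"),
])
surgery = SurgicalStep("gastric_cancer_surgery", [cut_vessel])
print([leaf.name for leaf in surgery.leaves()])  # ['clipping', 'moving', 'cutting']
```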
- The surgical image obtained in step S100 is obtained by performing a specific surgery on a patient, and may include image frames photographing any one of the surgical steps belonging to a specific level of the specific surgical procedure organized in the hierarchical structure.
- Here, the specific level may mean an intermediate level other than the lowest and highest levels of the hierarchical structure.
- That is, the surgical image obtained in step S100 may consist of image frames containing a series of predefined surgical operations for a specific surgical step of a specific surgery.
- In one embodiment, the computer may obtain, from each of a plurality of patients, a surgical image corresponding to a specific surgical step predefined within a specific surgical procedure.
- In this case, each of the surgical images obtained from the plurality of patients may be set as image frames containing the predetermined surgical operations performed in the specific surgical step.
- For example, a medical staff member or the computer may generate such an image in advance by extracting only the image frames corresponding to the specific surgical step from a surgical image.
- Hereinafter, a surgical image set as image frames containing the predetermined surgical operations performed in a specific surgical step is referred to as a preset surgical image.
- Next, the computer may generate learning data using the preset surgical image acquired in step S100 and the surgery time obtained based on the preset surgical image (S200).
- In one embodiment, the computer may obtain a first surgery time based on the preset surgical image.
- The computer may also generate a censored surgical image from which an arbitrary section has been removed from the preset surgical image, and obtain a second surgery time based on the censored surgical image.
- The computer may then generate, as learning data, at least one of the preset surgical image and the censored surgical image, based on the first surgery time and the second surgery time. The detailed process is described with reference to FIG. 3.
- That is, the present invention provides a method of obtaining surgical images containing the surgical operations performed in a specific surgical step, constructing learning data from them, and learning on that data, so as to more accurately predict the surgery time required in each surgical step.
- Furthermore, the present invention provides a method for estimating the remaining surgery time based on the surgical image at the current time point obtained during an actual surgery.
- FIG. 3 is a view for explaining a process of generating learning data based on a surgical image according to an embodiment of the present invention.
- Referring to FIG. 3, the computer may acquire preset surgical images from each of a plurality of patients (a first patient to an n-th patient).
- As described above, even when the same surgical step is performed, the image frames included in each preset surgical image obtained from the plurality of patients may differ depending on the patient's condition or on the operating style of the medical staff performing the surgery.
- Accordingly, the present invention constructs learning data based on the preset surgical images obtained from the patients, and learns from them in order to accurately predict the surgery time of the corresponding surgical image (i.e., the corresponding surgical step).
- In one embodiment, the computer may generate the learning data by applying a survival analysis technique. That is, the computer may apply survival analysis to analyze the surgery time based on the surgical images, and generate the learning data accordingly.
- First, the computer may generate, from each preset surgical image, a censored surgical image in which an arbitrary section has been removed (S210).
- Here, the censored surgical image may be an image from which at least one image frame corresponding to an arbitrary section has been removed from the preset surgical image.
- For example, the preset surgical image may include image frames of another surgical step (e.g., before or after the step), or may include additional surgical movements that are not essential to the surgical step.
- In other words, since the preset surgical image may not be an image containing only the essential surgical operations of the corresponding surgical step, the computer generates a censored surgical image in which an arbitrary section has been removed from the preset surgical image.
- In this case, the arbitrary section may be determined through a random function, or a section containing the front or rear image frames of the surgical image may be determined as the arbitrary section.
- For example, as shown in FIG. 3, the computer may generate the first to n-th preset surgical images and the first to n-th censored surgical images obtained from the first to n-th patients, respectively.
- In FIG. 3, the censored surgical images are represented by the image frames enclosed in the dotted-line boxes. A sketch of this censoring step follows below.
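- A minimal sketch of step S210, assuming the preset surgical image is available as a list of frames and that one random contiguous section is removed to form the censored image; the frame rate and the bound on the removed section are illustrative assumptions, and each duration is taken as the playback time of the clip.

```python
import random

FPS = 30.0  # assumed frame rate; the patent does not specify one

def make_censored_image(frames, max_fraction=0.3, seed=None):
    """Remove one arbitrary contiguous section of frames (S210) and return
    the censored frame list together with the first surgery time (duration
    of the preset image) and the second surgery time (duration of the
    censored image), both in seconds."""
    rng = random.Random(seed)
    n = len(frames)
    section_len = rng.randint(1, max(1, int(n * max_fraction)))
    start = rng.randint(0, n - section_len)
    censored = frames[:start] + frames[start + section_len:]
    return censored, n / FPS, len(censored) / FPS

# Usage with dummy frames, e.g. 30 s of video at 30 fps:
frames = list(range(900))
censored, first_time, second_time = make_censored_image(frames, seed=0)
print(f"Actual Duration {first_time:.1f}s, Censored Duration {second_time:.1f}s")
```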
- Next, the computer obtains the surgery time required to perform the specific surgical step based on each preset surgical image (hereinafter, the first surgery time), and the surgery time required to perform the specific surgical step based on each censored surgical image (hereinafter, the second surgery time) (S220).
- For example, the computer may obtain and compare the first surgery times (e.g., the Actual Durations of FIG. 3) of the first to n-th preset surgical images and the second surgery times (e.g., the Censored Durations of FIG. 3) of the first to n-th censored surgical images.
- Here, each surgery time may be the time taken when all the image frames in the corresponding surgical image are performed; for example, it may be the playback time of the surgical image.
- Next, the computer may generate the learning data using at least one of the preset surgical image and the censored surgical image, based on the first surgery time and the second surgery time (S230).
- For example, the computer may compare the first surgery time of the first preset surgical image (e.g., the Actual Duration of FIG. 3) with the second surgery time of the first censored surgical image (e.g., the Censored Duration of FIG. 3), determine from this comparison which surgical image to use as learning data for the first patient, and generate the learning data by mapping the determined surgical image (e.g., the first preset surgical image) to its surgery time.
- In one embodiment, in determining which surgical image to use as learning data by comparing the first surgery time with the second surgery time, the computer may use the lengths of the surgery times, use a predetermined criterion, or select arbitrarily. For example, the computer may select the surgical image having the shorter of the first surgery time and the second surgery time. Alternatively, the difference between the first surgery time and the second surgery time may be used; for example, when the difference is greater than or equal to a predetermined reference value, the computer may select the surgical image having the first surgery time as the learning data.
- The computer repeats this process based on the results of comparing the first surgery times of the first to n-th preset surgical images with the second surgery times of the first to n-th censored surgical images, and constructs the learning data for the specific surgical step by mapping each selected preset or censored surgical image to its surgery time. For example, when a preset surgical image is selected, the computer may set its surgical status to "event" and record the surgery time of the selected surgical image in the learning data. Alternatively, when a censored surgical image is selected, the computer may set its surgical status to "censored" and record the surgery time of the selected surgical image in the learning data. That is, the computer may generate a learning data set using the preset or censored surgical images obtained from the plurality of patients on whom the specific surgical step was performed. A sketch of this record construction follows below.
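- The learning-data record just described can be sketched in the style of survival analysis: one sample per patient, carrying its frames, its duration, and an event/censored status. The selection rule shown (a threshold on the difference, otherwise the shorter clip) combines two of the options the text mentions, and the threshold value is a hypothetical assumption.

```python
def build_learning_record(preset_frames, censored_frames,
                          first_time, second_time, diff_threshold=5.0):
    """Choose one clip per patient and label it 'event' or 'censored' (S230).

    diff_threshold (seconds) is an illustrative reference value: when the
    two durations differ by at least this much, the preset image with its
    first surgery time is kept, as one example rule in the text."""
    if abs(first_time - second_time) >= diff_threshold:
        frames, duration, status = preset_frames, first_time, "event"
    elif second_time < first_time:
        frames, duration, status = censored_frames, second_time, "censored"
    else:
        frames, duration, status = preset_frames, first_time, "event"
    return {"frames": frames, "duration": duration, "status": status}

# Dummy usage: one record per patient, aggregated over the n patients.
preset, censored = list(range(900)), list(range(700))
dataset = [build_learning_record(preset, censored, 30.0, 23.3)]
print(dataset[0]["status"], dataset[0]["duration"])  # event 30.0
```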
- Finally, the computer may predict the surgery time of the specific surgical step by performing learning based on the learning data generated in step S200 (S300).
- In one embodiment, the computer may perform the learning using each image frame in the learning data and the surgery time required to perform the specific surgical step based on those image frames.
- Through this, the computer can learn the average surgery time of the specific surgical step. For example, when the preset surgical image is generated as the learning data, the computer may perform the learning based on each image frame in the preset surgical image and its first surgery time. Alternatively, when the censored surgical image is generated as the learning data, the computer may perform the learning based on each image frame in the censored surgical image and its second surgery time. The detailed process is described with reference to FIG. 4.
- FIG. 4 is a diagram illustrating a process of performing learning based on learning data according to an embodiment of the present invention.
- Referring to FIG. 4, the computer may acquire the learning data and perform the learning using deep learning.
- Here, the learning data may be generated through steps S210 to S230 described above, based on a preset surgical image or a censored surgical image.
- First, the computer may receive the learning data and extract each image frame L1 to LN from the learning data (S310).
- For example, the computer may extract each image frame included in a preset surgical image from the input learning data, or extract each image frame included in a censored surgical image.
- Next, the computer may learn from the extracted image frames using a convolutional neural network (CNN), and as a result extract feature information about each image frame (S320).
- For example, the computer may learn the characteristics of the surgical image by inputting each image frame to at least one layer (e.g., a convolution layer). As a result of this learning, the computer can infer what each image frame represents or means. A sketch of this per-frame feature extraction follows below.
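- A minimal sketch of step S320, assuming a standard torchvision backbone in place of the unspecified CNN of the patent; the choice of ResNet-18 and the 512-dimensional feature size are assumptions.

```python
import torch
import torchvision.models as models

# Assumed backbone: the patent only specifies "a CNN", not an architecture.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # expose the 512-d feature vector per frame

def extract_frame_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, H, W) image frames -> (N, 512) feature vectors."""
    backbone.eval()
    with torch.no_grad():
        return backbone(frames)

features = extract_frame_features(torch.randn(8, 3, 224, 224))
print(features.shape)  # torch.Size([8, 512])
```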
- Next, the computer may perform learning using a recurrent neural network (RNN) on the feature information of each image frame derived from the CNN learning (S330).
- In one embodiment, the computer may receive the learning data in units of frames and perform the learning using an RNN (e.g., the LSTM method).
- In this case, the feature information of each image frame may be input, and the learning may be performed using the surgical status information (e.g., event/censored) of the corresponding learning data and the surgery time of the corresponding learning data.
- Since an RNN performs learning by connecting at least one image frame of a previous time point with the image frame of the current time point, the computer learns the relationships between the image frames based on their feature information, and can thereby grasp whether the frames contain the surgical operations predefined for the specific surgical step, or contain the surgical operations of another surgical step.
- Finally, the computer may predict the average surgery time of the specific surgical step through the learning process using the CNN and the RNN as described above (S340).
- As described above, the computer acquires a surgical image of the specific surgical step from each of a plurality of patients, generates learning data based on them, and can repeatedly apply the learning described with reference to FIG. 4. By repeatedly learning from a large amount of learning data in this way, the computer can predict the average surgery time required for the specific surgical step. A sketch combining the CNN and RNN stages follows below.
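- Putting steps S310 to S340 together, the following is a minimal sketch of the sequence model: per-frame CNN features feed an LSTM whose final hidden state regresses the step's surgery time. The loss handles the event/censored status in a simple survival-analysis spirit, penalizing censored samples only when the prediction falls below the observed duration; this particular loss is an assumption, as the patent does not specify one.

```python
import torch
import torch.nn as nn

class SurgeryTimeModel(nn.Module):
    """LSTM over per-frame CNN features, regressing the step's surgery time."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frame_feats):            # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(frame_feats)   # h_n: (1, B, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # (B,) predicted duration

def duration_loss(pred, duration, is_event):
    """MSE for 'event' samples; one-sided penalty for 'censored' samples,
    which only lower-bound the true duration (an assumed formulation)."""
    event_loss = (pred - duration) ** 2
    censored_loss = torch.clamp(duration - pred, min=0.0) ** 2
    return torch.where(is_event, event_loss, censored_loss).mean()

model = SurgeryTimeModel()
feats = torch.randn(4, 120, 512)  # 4 clips of 120 frames of CNN features each
loss = duration_loss(model(feats),
                     torch.tensor([30.0, 25.0, 40.0, 35.0]),
                     torch.tensor([True, False, True, False]))
loss.backward()
```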
- By performing steps S100 to S300 described above, the computer can finally predict the surgery time required to perform each surgical step of a surgical procedure. Accordingly, the computer can build a learning model for predicting the surgery time of each surgical step.
- One embodiment of the present invention applies the surgery times predicted for each surgical step, as described above, to the surgical procedures of other patients.
- In one embodiment, the computer may obtain an actual surgical image of a patient undergoing surgery in real time.
- In this case, the computer may extract the image frame at the current time point from the acquired actual surgical image, and recognize the surgical step currently being performed based on the extracted frame. The computer may then calculate, based on the image frame at the current time point, the surgery time elapsed so far in the current surgical step.
- Also, the computer can grasp the average surgery time of the current surgical step through the learning for predicting the surgery time of each surgical step described above. Therefore, using the average surgery time of the current surgical step predicted through the learning, together with the elapsed surgery time calculated from the image frame at the current time point of the ongoing actual surgery, the computer can estimate the remaining surgery time required to complete the surgical operations of the current surgical step, as sketched below.
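- The remaining-time estimate described here reduces to a subtraction, sketched under the assumption that the learned model supplies the step's average duration and that the elapsed time comes from the current frame index and an assumed frame rate.

```python
def estimate_remaining_time(avg_step_duration, current_frame_index, fps=30.0):
    """Remaining surgery time for the current step, in seconds.

    avg_step_duration: average duration of this surgical step, as predicted
    by the learned model (assumed to be given here).
    current_frame_index: index of the frame at the current time point within
    the real-time surgical image of the step."""
    elapsed = current_frame_index / fps
    return max(0.0, avg_step_duration - elapsed)

# e.g. a step predicted to take 600 s, currently at frame 9000 (300 s in):
print(estimate_remaining_time(600.0, 9000))  # 300.0
```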
- As described above, according to the present invention, it is possible not only to predict the surgery time required for each surgical step, but also to accurately predict the time remaining after the current time point. By providing the accurate surgery time and remaining surgery time to the medical staff, the current progress of the operation can be understood more accurately, and the remainder of the surgical procedure can be carried out efficiently.
- In general, the accurate surgery time and the remaining surgery time must be known in order to proceed smoothly with a patient's operation.
- In particular, since the anesthesia time during an operation is important, the present invention, which can provide the surgery time and the remaining surgery time in real time, makes it possible to calculate a correct anesthetic dosage. It is also effective for determining an appropriate additional anesthesia dose according to the remaining surgery time.
- FIG. 5 is a view schematically showing the configuration of an apparatus 200 for performing a method for predicting a surgery time based on a surgery image according to an embodiment of the present invention.
- Referring to FIG. 5, the processor 210 may include one or more cores (not shown) and a graphics processor (not shown), and/or a connection passage (e.g., a bus) through which it transmits and receives signals to and from other components.
- The processor 210 executes the one or more instructions stored in the memory 220 to perform the method of predicting surgery time based on a surgical image described with reference to FIGS. 2 to 4.
- For example, by executing the one or more instructions stored in the memory 220, the processor 210 may obtain a preset surgical image containing the surgical operations of a specific surgical step, generate learning data using the preset surgical image and the surgery time obtained based on the preset surgical image, and perform learning based on the learning data to predict the surgery time of the specific surgical step.
- Meanwhile, the processor 210 may further include random access memory (RAM, not shown) and read-only memory (ROM, not shown) that temporarily and/or permanently store the signals (or data) processed inside the processor 210.
- Also, the processor 210 may be implemented in the form of a system on chip (SoC) including at least one of a graphics processor, RAM, and ROM.
- The memory 220 may store programs (one or more instructions) for the processing and control of the processor 210. The programs stored in the memory 220 may be divided into a plurality of modules according to their functions.
- The method of predicting surgery time based on a surgical image according to the embodiments of the present invention described above may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- For the computer to read the program and execute the methods implemented as the program, the program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the computer's processor (CPU) can read through the computer's device interface. Such code may include functional code related to the functions that define what is necessary for executing the methods, and may include control code related to the execution procedures necessary for the computer's processor to execute those functions according to a predetermined procedure.
- In addition, the code may further include memory-reference code indicating at which location (address) of the computer's internal or external memory the additional information or media necessary for the computer's processor to execute the functions should be referenced.
- Also, when the computer's processor needs to communicate with another remote computer or server in order to execute the functions, the code may further include communication-related code indicating how to communicate with the remote computer or server using the computer's communication module, and what information or media should be transmitted and received during the communication.
- The storage medium is not a medium that stores data for a short time, such as a register, a cache, or a memory, but a medium that stores data semi-permanently and is readable by a device.
- Specific examples of the storage medium include, but are not limited to, a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. That is, the program may be stored in various recording media on various servers that the computer can access, or in various recording media on the user's computer. The media may also be distributed over network-coupled computer systems so that the computer-readable code is stored in a distributed fashion.
- The steps of the method or algorithm described in connection with the embodiments of the present invention may be implemented directly in hardware, in a software module executed by hardware, or in a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any form of computer-readable recording medium well known in the art.
Abstract
The present invention relates to a method for predicting surgery time based on a surgical image. The method comprises the steps of: acquiring a preset surgical image containing a surgical operation for a specific surgical step; generating learning data using the preset surgical image and a required surgery time obtained based on the preset surgical image; and performing learning based on the learning data so as to predict the surgery time in the specific surgical step.
Applications Claiming Priority (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2018-0019867 | 2018-02-20 | ||
| KR10-2018-0019868 | 2018-02-20 | ||
| KR20180019868 | 2018-02-20 | ||
| KR20180019867 | 2018-02-20 | ||
| KR20180019866 | 2018-02-20 | ||
| KR10-2018-0019866 | 2018-02-20 | ||
| KR10-2018-0145157 | 2018-11-22 | ||
| KR1020180145157A KR102013828B1 (ko) | 2018-02-20 | 2018-11-22 | Method and apparatus for predicting surgery time based on surgical images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019164273A1 true WO2019164273A1 (fr) | 2019-08-29 |
Family
ID=67688215
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2019/002091 Ceased WO2019164273A1 (fr) | 2019-02-20 | Method and device for predicting surgery time based on a surgical image |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019164273A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114067957A (zh) * | 2022-01-17 | 2022-02-18 | 武汉大学 | Surgery time correction method and apparatus, electronic device, and storage medium |
| WO2024088836A1 (fr) * | 2022-10-24 | 2024-05-02 | Koninklijke Philips N.V. | Systems and methods for estimating time to target from image features |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006077797A1 (fr) * | 2005-01-19 | 2006-07-27 | Olympus Corporation | Surgical data tracking device, surgical monitoring apparatus, and surgical data processing method |
| JP2007122174A (ja) * | 2005-10-25 | 2007-05-17 | Olympus Medical Systems Corp | Surgery schedule display system |
| JP2011224336A (ja) * | 2010-03-31 | 2011-11-10 | Sugiura Gijutsushi Jimusho:Kk | Surgical process management system, method thereof, and surgical process management apparatus |
| KR101302595B1 (ko) * | 2012-07-03 | 2013-08-30 | 한국과학기술연구원 | System and method for estimating the progress stage of a surgery |
| KR20180010721A (ko) * | 2016-07-22 | 2018-01-31 | 한국전자통신연구원 | Intelligent surgery support system and method |
- 2019-02-20: WO PCT/KR2019/002091 patent/WO2019164273A1/fr not_active Ceased
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19756625; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19756625; Country of ref document: EP; Kind code of ref document: A1 |