WO2023038127A1 - Inference device, information processing method, and computer program - Google Patents
Inference device, information processing method, and computer program
- Publication number
- WO2023038127A1 (PCT/JP2022/033958)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- inference
- surgical
- field image
- console
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00207—Electrical control of surgical instruments with hand gesture control or hand gesture recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/30—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
- A61B2090/306—Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure using optical fibres
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/371—Surgical systems with images on a monitor during operation with simultaneous use of two cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/32—Surgical robots operating autonomously
Definitions
- the present invention relates to an inference device, an information processing method, and a computer program.
- However, Patent Document 1 does not describe a technique for presenting the result of inference on the operative field image to the operator.
- An object of the present invention is to provide an inference device, an information processing method, and a computer program capable of inferring an operative field image obtained from a surgical robot and transmitting information based on the inference result to a console.
- An inference device according to the present invention is an inference device connected between a surgical robot and a console that controls the surgical robot, and includes an image acquisition unit that acquires an operative field image captured by an imaging unit of the surgical robot, an inference unit that performs inference processing on the acquired operative field image, and a transmission unit that transmits at least one of the operative field image and information based on the inference result to the console according to a transmission setting made by the console.
- An information processing method according to the present invention causes a computer connected between a surgical robot and a console that controls the surgical robot to execute processing of acquiring an operative field image captured by an imaging unit of the surgical robot, performing inference on the acquired operative field image, and transmitting at least one of the operative field image and information based on the inference result to the console according to a transmission setting received through the console.
- A computer program according to the present invention causes a computer connected between a surgical robot and a console that controls the surgical robot to execute processing of acquiring an operative field image captured by an imaging unit of the surgical robot, performing inference on the acquired operative field image, and transmitting at least one of the operative field image and information based on the inference result to the console according to a transmission setting received through the console.
- FIG. 1 is a block diagram illustrating a configuration example of a surgical robot system according to Embodiment 1;
- FIG. 2 is a schematic diagram showing an example of an operative field image;
- FIG. 3 is a schematic diagram showing a configuration example of a learning model;
- FIG. 4 is a schematic diagram showing an example of an inference image;
- FIG. 5 is a schematic diagram showing a display example on the console;
- FIG. 6 is a flowchart for explaining the procedure of processing executed in the surgical robot system according to Embodiment 1;
- FIG. 7 is a flowchart for explaining the procedure of processing executed in the surgical robot system according to Embodiment 2;
- FIG. 8 is an explanatory diagram for explaining a first specific example of a control method;
- FIG. 9 is an explanatory diagram for explaining a second specific example of the control method;
- FIG. 10 is an explanatory diagram for explaining a third specific example of the control method;
- FIG. 11 is an explanatory diagram for explaining a fourth specific example of the control method;
- FIG. 12 is an explanatory diagram for explaining a fifth specific example of the control method;
- FIG. 13 is an explanatory diagram for explaining a sixth specific example of the control method;
- FIG. 14 is a flowchart for explaining the procedure of processing executed by the inference unit according to Embodiment 3;
- FIG. 15 is a flowchart for explaining the procedure of processing executed by the inference unit according to Embodiment 4;
- FIG. 16 is a flowchart for explaining the procedure of processing executed by the inference unit according to Embodiment 5;
- FIG. 1 is a block diagram showing a configuration example of a surgical robot system 1 according to Embodiment 1.
- a surgical robot system 1 according to Embodiment 1 includes a surgical robot 10 , an inference unit 20 , a server device 30 and a console 40 .
- the surgical field is imaged by the laparoscope 15 mounted on the surgical robot 10, and the surgical field images obtained by the laparoscope 15 are displayed on the monitors 44A and 44B of the console 40.
- The operator (doctor) operates the surgical devices mounted on the surgical robot 10 by moving the arm operating device 43 while checking the operative field images displayed on the monitors 44A and 44B, thereby performing laparoscopic surgery.
- The present invention is not limited to laparoscopic surgery, and is generally applicable to robot-assisted endoscopic surgery using a thoracoscope, a gastrointestinal endoscope, a cystoscope, an arthroscope, a spinal endoscope, a neuroendoscope, an operating microscope, or the like.
- the surgical robot 10 includes a control unit 11, drive units 12A-12D, arm units 13A-13D, a light source device 14, a laparoscope 15, a signal processing unit 16, and the like.
- the control unit 11 is composed of, for example, a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and the like.
- the control unit 11 controls the operation of each hardware unit included in the surgical robot 10 based on control information and the like input from the console 40 .
- One of the arms 13A to 13D provided in the surgical robot 10 (referred to as the arm 13A) is used to move the laparoscope 15 three-dimensionally; for this purpose, the laparoscope 15 is attached to the distal end of the arm portion 13A.
- The drive unit 12A includes an actuator, a motor, and the like that drive the arm unit 13A; by driving the arm unit 13A under the control of the control unit 11, the laparoscope 15 attached to its distal end is moved three-dimensionally. Note that movement control of the laparoscope 15 may be automatic control or manual control via the console 40.
- the remaining three (arms 13B to 13D) are used to three-dimensionally move the surgical device.
- surgical devices are attached to the distal ends of the arms 13B to 13D.
- Surgical devices include forceps, energy treatment tools, vascular clips, automated anastomosis instruments, and the like.
- the drive unit 12B includes an actuator, a motor, and the like for driving the arm unit 13B. By driving the arm unit 13B under the control of the control unit 11, the surgical device attached to the distal end portion can be moved three-dimensionally.
- Although movement control of the surgical devices is mainly manual control via the console 40, automatic control may be used as an auxiliary.
- the three arm portions 13B to 13D do not need to be controlled simultaneously, and two of the three arm portions 13B to 13D are appropriately selected and manually controlled.
- the light source device 14 includes a light source, a light guide, an illumination lens, and the like.
- the light source device 14 guides the illumination light emitted from the light source to the distal end of the light guide, and irradiates the surgical field with the illumination light through the illumination lens provided at the distal end of the light guide.
- the light emitted by the light source device 14 may be normal light or special light.
- Ordinary light is, for example, light having a wavelength band of white light (380 nm to 650 nm).
- special light is illumination light different from normal light, and corresponds to narrow band light, infrared light, excitation light, and the like.
- the laparoscope 15 includes an imaging device such as CMOS (Complementary Metal Oxide Semiconductor), a driver circuit equipped with a timing generator (TG), an analog signal processing circuit (AFE), and the like.
- The driver circuit of the laparoscope 15 takes in the RGB color signals output from the imaging device in synchronization with the clock signal output from the TG, performs necessary processing such as noise removal, amplification, and AD conversion in the AFE, and generates image data (operative field images) in digital format.
- The signal processing unit 16 includes a DSP (Digital Signal Processor), an image memory, and the like, and appropriately performs processing such as color separation, color interpolation, gain correction, white balance adjustment, and gamma correction on the image data input from the laparoscope 15.
- the signal processing unit 16 generates moving image frame images from the processed image data, and sequentially outputs the generated frame images to the inference unit 20 .
- the frame rate of frame images is, for example, 30 FPS (Frames Per Second).
- The signal processing unit 16 may output video data conforming to a predetermined standard such as NTSC (National Television System Committee), PAL (Phase Alternating Line), or DICOM (Digital Imaging and Communications in Medicine).
- the inference unit 20 includes a calculation unit 21, a storage unit 22, a first connection unit 23, a second connection unit 24, a third connection unit 25, and the like.
- the computing unit 21 is composed of a CPU, a ROM, a RAM, and the like.
- the ROM in the arithmetic unit 21 stores a control program and the like for controlling the operation of each hardware unit included in the inference unit 20 .
- The CPU in the arithmetic unit 21 executes the control program stored in the ROM and the computer programs stored in the storage unit 22, which will be described later, and controls the operation of each hardware unit, thereby causing the entire device to function as the inference device of the present application.
- the RAM in the calculation unit 21 temporarily stores data and the like that are used during execution of the calculation.
- In the present embodiment, the calculation unit 21 is configured to include a CPU, a ROM, and a RAM, but the configuration of the calculation unit 21 is arbitrary; it may be an arithmetic circuit or a control circuit including, for example, an FPGA (Field Programmable Gate Array), a quantum processor, and volatile or non-volatile memory. Further, the calculation unit 21 may have functions such as a clock that outputs date and time information, a timer that measures the elapsed time from when a measurement start instruction is given until a measurement end instruction is given, and a counter that counts the number of occurrences.
- the storage unit 22 includes a storage device such as flash memory.
- the storage unit 22 stores a computer program executed by the calculation unit 21, various data acquired from the outside, various data generated inside the apparatus, and the like.
- the computer programs stored in the storage unit 22 include an inference processing program PG for causing the calculation unit 21 to perform inference processing on the operative field image.
- These computer programs may be a single computer program or a program group constructed by a plurality of computer programs.
- the computer program including the inference processing program PG may be distributed to a plurality of computers and executed cooperatively by the plurality of computers.
- A computer program including the inference processing program PG is provided by a non-transitory recording medium RM on which the computer program is recorded in a readable manner.
- the recording medium RM is a portable memory such as a CD-ROM, USB memory, SD (Secure Digital) card, or the like.
- the calculation unit 21 uses a reading device (not shown) to read a desired computer program from the recording medium RM, and stores the read computer program in the storage unit 22 .
- The computer program containing the inference processing program PG may also be provided by communication. In this case, the calculation unit 21 downloads a desired computer program through communication, and stores the downloaded computer program in the storage unit 22.
- the storage unit 22 stores a learning model MD used for inference processing.
- a learning model MD is a learning model used to infer the position of an object to be recognized within the operative field image.
- the learning model MD is configured to output information indicating the position of the object when the surgical field image is input.
- The object to be recognized in the operative field image may be an organ such as the esophagus, stomach, large intestine, pancreas, spleen, ureter, lung, prostate, uterus, gallbladder, liver, or vas deferens, or a tissue such as connective tissue, fat, nerve, blood vessel, muscle, or membranous structure.
- the object may be a surgical device such as forceps, energy treatment instrument, vascular clip, automatic anastomotic device, and the like.
- the learning model MD may output, as information indicating the position of the object, probability information indicating whether or not each pixel or specific region corresponds to the object.
- the storage unit 22 stores definition information of the learning model MD including learned parameters.
- Alternatively, the learning model MD may be a learning model used to infer a scene.
- the learning model MD is configured to output information about the scene indicated by the surgical image when the surgical image is input.
- The information about scenes output by the learning model MD is, for example, information such as the probability that the scene includes a specific organ, the probability that the scene shows a characteristic surgical operation being performed, and the probability that a characteristic operation (ligation of blood vessels, resection of the intestinal tract, anastomosis, etc.) is being performed using a specific surgical device (vascular clip, automatic anastomosis instrument, etc.).
- the first connection section 23 has a connection interface that connects the surgical robot 10 .
- the inference unit 20 receives image data of an operating field image captured by the laparoscope 15 and processed by the signal processing section 16 through the first connection section 23 . Image data input from the first connection portion 23 is output to the calculation portion 21 and the storage portion 22 .
- the second connection unit 24 has a connection interface that connects the server device 30 .
- the inference unit 20 outputs the image data of the surgical field image acquired from the surgical robot 10 and the inference result by the calculation unit 21 to the server device 30 through the second connection unit 24 .
- the third connection unit 25 has a connection interface that connects the console 40 .
- the inference unit 20 outputs the image data of the surgical field image acquired from the surgical robot 10 and the inference result by the calculation unit 21 to the console 40 through the third connection unit 25 .
- Further, control information regarding the surgical robot 10 may be input to the inference unit 20 through the third connection unit 25.
- the control information regarding the surgical robot 10 includes information such as the positions, angles, velocities, and accelerations of the arms 13A-13D.
- the inference unit 20 may include an operation section configured with various switches and levers operated by an operator or the like. A predetermined specific function may be assigned to a switch or lever provided in the operation unit, or a function set by the operator may be assigned.
- the inference unit 20 may include a display unit that displays information to be notified to the operator or the like using characters or images, and may include an output unit that outputs information to be notified to the operator or the like by voice or sound. .
- the server device 30 includes a codec section 31, a database 32, and the like.
- the codec unit 31 has a function of encoding the image data of the surgical field input from the inference unit 20 and storing it in the database 32, a function of reading out and decoding the image data stored in the database 32, and the like.
- the database 32 stores image data encoded by the codec section 31 .
- the console 40 includes a master controller 41, an input device 42, an arm operation device 43, monitors 44A and 44B, and the like.
- the master controller 41 is composed of a CPU, ROM, RAM, etc., and controls the operation of each hardware unit provided in the console 40 .
- the input device 42 is an input device such as a keyboard, touch panel, switch, lever, etc., and receives instructions and information input from the operator.
- the input device 42 is mainly a device for operating the inference unit 20 , but may be configured so that an operation target can be selected in order to accept switching of the display function on the console 40 .
- The arm operating device 43 includes operation tools for remotely operating the arm units 13A to 13D of the surgical robot 10.
- The operation tools include a left-hand operation rod operated by the operator's left hand and a right-hand operation rod operated by the operator's right hand.
- the arm operating device 43 measures the movement of the operating tool using a measuring instrument such as a rotary encoder, and outputs the measured value to the master controller 41 .
- the master controller 41 generates control commands for controlling the arm units 13A to 13D of the surgical robot 10 based on the measured values input from the arm operating device 43, and transmits the generated control commands to the surgical robot 10.
- the surgical robot 10 controls the operations of the arm sections 13A to 13D based on control commands input from the console 40.
- Thereby, the arm portions 13A to 13D of the surgical robot 10 operate following the movements of the operation tools (the left-hand operation rod and the right-hand operation rod) on the console 40.
- the monitors 44A and 44B are display devices such as liquid crystal displays for displaying necessary information to the operator.
- One of the monitors 44A and 44B is used, for example, as a main monitor for displaying an operative field image, and the other is used as a sub-monitor for displaying supplementary information such as patient information.
- When the laparoscope 15 is configured to output a left-eye operative field image and a right-eye operative field image, the left-eye operative field image may be displayed on the monitor 44A and the right-eye operative field image on the monitor 44B, thereby performing three-dimensional display of the operative field image.
- In the present embodiment, the inference unit 20 and the server device 30 are provided separately, but the inference unit 20 and the server device 30 may be configured as an integrated device. Additionally, the inference unit 20 and the server device 30 may be incorporated into the console 40.
- FIG. 2 is a schematic diagram showing an example of an operating field image.
- the operative field image in the present embodiment is an image obtained by imaging the inside of the patient's abdominal cavity with the laparoscope 15 .
- the operative field image does not need to be a raw image output by the laparoscope 15, and may be an image (frame image) processed by the signal processing unit 16 or the like.
- the operative field imaged by the laparoscope 15 includes various tissues such as organs, blood vessels, nerves, connective tissue, lesions, membranes and layers. While grasping the relationship between these anatomical structures, the operator uses a surgical instrument such as an energy treatment tool or forceps to dissect the tissue including the lesion.
- the surgical field shown in FIG. 2 includes tissue NG including lesions such as malignant tumors, tissue ORG constituting organs, and connective tissue CT connecting these tissues.
- the tissue NG is a site to be removed from the body, and the tissue ORG is a site to remain in the body.
- the connective tissue CT is exposed by grasping the tissue NG with forceps 130B and expanding it upward in the figure.
- an operation is performed to remove a lesion such as a malignant tumor formed in the patient's body.
- The operator grasps the tissue NG including the lesion with the forceps 130B and expands it in an appropriate direction, thereby exposing the connective tissue CT existing between the tissue NG including the lesion and the tissue ORG to be left. The operator then cuts the exposed connective tissue CT using the energy treatment tool 130C, thereby separating the tissue NG including the lesion from the tissue ORG to be left.
- the inference unit 20 acquires a surgical field image as shown in FIG. 2 and performs inference processing on the acquired surgical field image. Specifically, the inference unit 20 infers the position of the object to be recognized within the surgical field image. A learning model MD is used for the inference processing.
- FIG. 3 is a schematic diagram showing a configuration example of the learning model MD.
- the learning model MD is a learning model for performing image segmentation, and is constructed by a neural network with convolutional layers such as SegNet.
- The learning model MD is not limited to SegNet, and may be constructed using any neural network capable of image segmentation, such as FCN (Fully Convolutional Network), U-Net (U-Shaped Network), or PSPNet (Pyramid Scene Parsing Network).
- the learning model MD may be constructed using a neural network for object detection such as YOLO (You Only Look Once) or SSD (Single Shot Multi-Box Detector) instead of the neural network for image segmentation.
- a learning model MD includes, for example, an encoder EN, a decoder DE, and a softmax layer SM.
- the encoder EN consists of alternating convolutional layers and pooling layers.
- The convolutional layers are stacked in two to three layers. In the example of FIG. 3, the convolutional layers are shown without hatching, and the pooling layers are shown with hatching.
- the convolution layer performs a convolution operation between the input data and a filter of a predetermined size (for example, 3 × 3, 5 × 5, etc.). That is, the input value input to the position corresponding to each element of the filter is multiplied by the weighting factor preset in the filter for each element, and the linear sum of the multiplied values for each element is calculated.
- the output in the convolutional layer is obtained by adding the set bias to the calculated linear sum.
- The result of the convolution operation may be transformed by an activation function such as ReLU (Rectified Linear Unit).
- the output of the convolutional layer represents a feature map that extracts the features of the input data.
- the pooling layer calculates the local statistics of the feature map output from the convolution layer, which is the upper layer connected to the input side. Specifically, a window of a predetermined size (for example, 2 × 2, 3 × 3) corresponding to the position of the upper layer is set, and local statistics are calculated from the input values within the window. For example, the maximum value can be used as the statistic.
- The size of the feature map output from the pooling layer is reduced (down-sampled) according to the size of the window. The example of FIG. 3 shows that an input image of 224 pixels × 224 pixels is sequentially down-sampled to feature maps of 112 × 112, 56 × 56, 28 × 28, ..., and finally to a 1 × 1 feature map.
- the output of the encoder EN (1 × 1 feature map in the example of FIG. 3) is input to the decoder DE.
- the decoder DE is constructed by alternating deconvolution layers and depooling layers.
- The deconvolution layers are stacked in two to three layers. In the example of FIG. 3, the deconvolution layers are shown without hatching, and the depooling layers are shown with hatching.
- the input feature map is deconvolved.
- the deconvolution operation is an operation to restore the feature map before the convolution operation under the presumption that the input feature map is the result of the convolution operation using a specific filter.
- Assuming that the specific filter is represented by a matrix, the product of the transpose of this matrix and the input feature map is calculated to generate a feature map for output.
- the operation result of the deconvolution layer may be transformed by an activation function such as ReLU described above.
- the inverse pooling layers of the decoder DE are individually associated one-to-one with the pooling layers of the encoder EN, and the associated pairs have substantially the same size.
- the inverse pooling layer again enlarges (upsamples) the size of the downsampled feature map in the pooling layer of the encoder EN.
- The example of FIG. 3 shows that the feature map is sequentially up-sampled to 1 × 1, 7 × 7, 14 × 14, ..., and finally to a 224 × 224 feature map.
- the output of the decoder DE (224 × 224 feature map in the example of FIG. 3) is input to the softmax layer SM.
- the softmax layer SM applies the softmax function to the input values from the deconvolution layer connected to the input side, and outputs the probability of the label identifying the part at each position (pixel).
- the learning model MD may output a probability indicating whether or not each pixel corresponds to an object to be recognized from the softmax layer SM with respect to the input of the operative field image.
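- The following is a minimal illustrative sketch, not taken from the publication, of an encoder-decoder segmentation network with a per-pixel softmax output of the kind described above. It is written in PyTorch; the layer widths, the two-class output, and all identifier names are assumptions made only for illustration.

```python
# Illustrative sketch only: a small SegNet-style encoder-decoder with a
# per-pixel softmax head, mirroring the structure described for the
# learning model MD. Layer widths and the two-class output are assumptions.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Encoder: convolution + ReLU, then pooling (down-sampling).
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 112 -> 56
        )
        # Decoder: deconvolution (transposed convolution) layers up-sample
        # the feature map back to the input resolution.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),  # 56 -> 112
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),                # 112 -> 224
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns per-pixel class probabilities via softmax.
        return torch.softmax(self.dec(self.enc(x)), dim=1)

# Example: one 224x224 RGB operative field image -> per-pixel probabilities.
probs = TinySegNet()(torch.rand(1, 3, 224, 224))   # shape (1, 2, 224, 224)
```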
- The calculation unit 21 of the inference unit 20 refers to the calculation result of the learning model MD and extracts pixels whose label probability output from the softmax layer SM is equal to or greater than a threshold value (for example, 90% or more), thereby generating an image (inference image) indicating the position of the object.
- FIG. 4 is a schematic diagram showing an example of an inference image.
- the example of FIG. 4 is an inference image showing the location of connective tissue.
- the connective tissue portion inferred using the learning model MD is indicated by a thick solid line, and other organ and tissue portions are indicated by broken lines for reference.
- the calculation unit 21 of the inference unit 20 generates an inference image of the connective tissue to distinguishably display the inferred connective tissue portion.
- the inference image is an image of the same size as the operative field image, and is an image in which pixels inferred as connective tissue are assigned specific colors.
- the color assigned to the pixels of the connective tissue is preferably a color that does not exist inside the human body so that it can be distinguished from organs, blood vessels, and the like.
- the color that does not exist inside the human body is, for example, a cold (blue) color such as blue or light blue.
- the degree of transparency is set for each pixel constituting the inference image, and pixels recognized as connective tissue are set to be opaque, and other pixels are set to be transparent.
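- As a rough sketch (not part of the publication), such an inference image could be assembled from the per-pixel probability map as follows. The 90% threshold and the blue colouring follow the description above, while the array names and the exact RGBA value are assumptions.

```python
# Illustrative sketch: build an RGBA inference image from a per-pixel
# probability map. Pixels at or above the threshold are painted an opaque
# cold colour (blue); all other pixels are left fully transparent.
import numpy as np

def make_inference_image(prob_map: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    h, w = prob_map.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)      # fully transparent by default
    mask = prob_map >= threshold                    # pixels inferred as the object
    rgba[mask] = (0, 128, 255, 255)                 # opaque light blue
    return rgba
```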
- the object to be recognized is not limited to connective tissue, and may be any structure such as an organ, blood (bleeding), or a surgical device.
- an object to be recognized is set in advance, and the learning model MD for the object is learned in advance and stored in the storage unit 22 .
- The inference unit 20 transmits, to the console 40, at least one of the operative field image (also referred to as the original image) acquired from the surgical robot 10 and the inference image generated from the operative field image. Which images are transmitted is set through the input device 42 of the console 40. That is, if the transmission setting is to transmit both the original image and the inference image, the inference unit 20 transmits both the original image and the inference image to the console 40; if the transmission setting is to transmit only the original image (or only the inference image), the inference unit 20 transmits only the original image (or only the inference image) to the console 40.
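- A minimal sketch, not from the publication, of the dispatch just described; the setting labels and the send() callback are assumptions.

```python
# Illustrative sketch: dispatch according to the transmission setting
# received from the console. The setting values and the send() helper
# are assumptions, not part of the publication.
def transmit(setting: str, original, inference, send) -> None:
    if setting == "both":
        send(original)
        send(inference)
    elif setting == "original_only":
        send(original)
    elif setting == "inference_only":
        send(inference)
```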
- In the present embodiment, an inference image indicating the position of the target object in the operative field image is generated, and the generated inference image is transmitted to the console 40. Alternatively, a configuration may be employed in which position information indicating the position of the object within the operative field image is generated and the generated position information is transmitted to the console 40.
- the position information indicating the position of the target object may be information specifying pixels corresponding to the target object, or may be information specifying the outline, barycenter, or the like of the area.
- FIG. 5 is a schematic diagram showing a display example on the console 40. The console 40 can superimpose the inference image on the operative field image and display it on the monitor 44A (or monitor 44B).
- The display example of FIG. 5 shows an example in which an inference image of connective tissue is superimposed on the original image and displayed. For convenience of drawing, the connective tissue portion is indicated by a thick solid line. By checking the display screen, the operator can clearly identify the connective tissue and grasp the site to be excised.
- In the present embodiment, the inference image is superimposed on the operative field image and displayed on the monitor 44A (or monitor 44B); alternatively, the inference image may be displayed in a display area separate from the operative field image.
- the operative field image may be displayed on one monitor 44A, and the inference image may be displayed on the other monitor 44B.
- FIG. 6 is a flow chart for explaining the procedure of processing executed in the surgical robot system 1 according to the first embodiment.
- the console 40 accepts transmission settings for the surgical field image and the inference image through the input device 42 (step S101).
- The transmission setting accepts a setting as to whether to transmit only the operative field image, only the inference image, or both the operative field image and the inference image.
- the console 40 notifies the inference unit 20 of the received transmission setting (step S102).
- FIG. 7 is a flowchart for explaining the procedure of processing executed in the surgical robot system 1 according to Embodiment 2.
- the surgical robot system 1 performs inference processing for the surgical field image, transmission processing of the image from the inference unit 20 to the console 40, display processing on the monitors 44A and 44B, and the like, in the same procedure as in the first embodiment.
- After performing the inference processing, the calculation unit 21 of the inference unit 20 generates control information for controlling the operation of the surgical robot 10 according to the inference result (step S121), and transmits the generated control information to the console 40 (step S122).
- For example, the calculation unit 21 may calculate the area of the object in time series based on the inference result of the learning model MD, and calculate a control amount of the arm portion 13A holding the laparoscope 15 so that the laparoscope 15 moves following the portion where the calculated area increases or decreases.
- the object may be a lesion or connective tissue to be excised, blood (bleeding), or the like.
- Alternatively, the calculation unit 21 may calculate the distance (mainly the distance in the depth direction) between the laparoscope 15 and the object based on the inference result of the learning model MD, and calculate the control amount of the arm portion 13A holding the laparoscope 15 according to the calculated distance. Specifically, the calculation unit 21 may calculate the control amount of the arm portion 13A so that the calculated distance becomes a preset distance.
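- A minimal sketch, not from the publication, of such a distance-keeping rule; the proportional gain and the treatment of the control amount as a displacement along the depth axis are assumptions.

```python
# Illustrative sketch: a simple proportional rule that moves the arm
# holding the laparoscope along the depth axis so that the measured
# distance to the object approaches a preset distance.
def depth_control_amount(measured_mm: float, preset_mm: float, gain: float = 0.5) -> float:
    # Positive value: move the laparoscope forward; negative: pull it back.
    return gain * (measured_mm - preset_mm)
```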
- Further, the calculation unit 21 may calculate the control amount of the arm unit 13A holding the laparoscope 15 so as to follow a region where the confidence of the inference result is relatively high.
- FIG. 8 is an explanatory diagram for explaining a first specific example of the control method.
- The area that the operator wants to see in the operative field image is often near the intersection of the extension line of the surgical device operated with the dominant hand (e.g., the right hand) and the extension line of the surgical device operated with the non-dominant hand (e.g., the left hand). Therefore, based on the inference result of the learning model MD, the calculation unit 21 recognizes the surgical device operated by the dominant hand and the surgical device operated by the non-dominant hand, derives the extension line of each surgical device, and finds their intersection point. In the example of FIG. 8, the extension line of each surgical device is indicated by dashed lines and the intersection point is indicated by P1.
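- A minimal sketch, under the assumption that each device's shaft is approximated by a point and a 2D direction vector in image coordinates, of how such an intersection point P1 could be computed; all identifier names are assumptions, not from the publication.

```python
# Illustrative sketch: intersection point of the extension lines of two
# surgical devices, each given by a point on its shaft and a direction
# vector in image coordinates.
import numpy as np

def extension_intersection(p1, d1, p2, d2):
    # Solve p1 + t*d1 == p2 + s*d2 for (t, s); returns None for parallel lines.
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, b)
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```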
- FIG. 10 is an explanatory diagram for explaining a third specific example of the control method.
- When the target of treatment with a surgical device is a blood vessel, an appropriate surgical device is selected according to the area (thickness) of the blood vessel, the shape of the blood vessel, and the like. Therefore, the calculation unit 21 recognizes the blood vessel appearing in the operative field image based on the inference result of the learning model MD, and obtains the area of the blood vessel from the recognition result. At this time, the calculation unit 21 may normalize the area of the blood vessel using the size (area) of the surgical devices (the forceps 130B and the energy treatment instrument 130C) shown in the operative field image as a reference.
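- A minimal sketch, not from the publication, of the normalization just described, assuming the recognition results are available as binary masks.

```python
# Illustrative sketch: normalize the recognized blood-vessel area by the
# area of a recognized surgical device so the value is less dependent on
# the camera's zoom level. Mask names are assumptions.
import numpy as np

def normalized_vessel_area(vessel_mask: np.ndarray, device_mask: np.ndarray) -> float:
    device_area = float(np.count_nonzero(device_mask))
    if device_area == 0.0:
        return 0.0
    return float(np.count_nonzero(vessel_mask)) / device_area
```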
- Depending on the obtained area or shape, a suitable surgical device such as an ultrasonic coagulation and incision device may be presented to the operator, for example by displaying an icon or by a voice notification.
- FIG. 11 is an explanatory diagram for explaining a fourth specific example of the control method.
- When the surgical devices operated by the operator's dominant hand and non-dominant hand are both grasping forceps and tissue is to be expanded, it is preferable to zoom out to a range in which both grasping forceps are fully captured. Therefore, based on the inference result of the learning model MD, the calculation unit 21 recognizes the surgical devices operated by the dominant hand and the non-dominant hand of the operator, and determines whether or not both surgical devices are grasping forceps.
- When both surgical devices are grasping forceps, the calculation unit 21 calculates the control amount of the arm unit 13A holding the laparoscope 15 so as to zoom out to a range in which the grasping forceps are sufficiently captured, and transmits control information based on the calculation result to the console 40. Alternatively, the calculation unit 21 may calculate depth information and calculate the control amount of the arm unit 13A holding the laparoscope 15 based on the calculated depth information.
- FIG. 11 shows a scene in which adipose tissue is grasped using two grasping forceps 130B and 130D, and shows a zoomed-out state according to the recognition of these two grasping forceps 130B and 130D.
- FIG. 12 is an explanatory diagram for explaining a fifth specific example of the control method.
- The calculation unit 21 recognizes the surgical device operated by the dominant hand of the operator based on the inference result of the learning model MD, and determines whether or not the recognized surgical device is a dissection device.
- When the recognized surgical device is a dissection device, the calculation unit 21 calculates the control amount of the arm portion 13A holding the laparoscope 15 so that the tip of the dissection device is sufficiently zoomed in, and transmits control information based on the calculation result to the console 40.
- For example, the calculation unit 21 may calculate the area of the distal end portion of the dissection device, and calculate the control amount of the arm unit 13A so as to zoom in until the calculated area is equal to or greater than a set value.
- FIG. 12 shows a state in which the distal end portion of the energy treatment instrument 130C is zoomed in as a result of recognizing that the surgical device operated by the operator's dominant hand is the energy treatment instrument 130C (a dissection device).
- The console 40 may automatically control the operation of the arm section 13A so as to follow the tip of the dissection device after zooming in.
- Alternatively, the console 40 may control the movement of the arm 13A so as to follow the tip of the dissection device and zoom in when the tip of the dissection device comes to rest.
- In the present embodiment, zooming in is performed by moving the arm 13A holding the laparoscope 15; alternatively, a configuration may be adopted in which zooming in is performed by controlling the zoom mechanism of the laparoscope 15.
- FIG. 13 is an explanatory diagram for explaining a sixth specific example of the control method.
- When bleeding is detected in the operative field image, control to move the laparoscope 15 or control to place gauze may be performed.
- the calculation unit 21 recognizes the bleeding region and calculates the area of the recognized bleeding region based on the inference result of the learning model MD.
- the bleeding area is indicated by P6.
- the area of the bleeding region P6 corresponds to the amount of bleeding.
- the calculation unit 21 determines whether or not the calculated area of the bleeding region P6 is equal to or greater than a preset threshold.
- When the area of the bleeding region P6 is equal to or greater than the threshold, the calculation unit 21 calculates a control amount of the arm portion 13A holding the laparoscope 15 so that the center of the captured image becomes a point in the bleeding region (for example, the center of gravity of the bleeding region P6), and transmits control information based on the calculation result to the console 40.
- Alternatively, the computing unit 21 may transmit, to the console 40, control information for executing control to place gauze on the bleeding region P6. Specifically, when grasping forceps gripping gauze are attached to the arm portion 13D, the computing portion 21 may generate control information for controlling the operation of the arm portion 13D and transmit it to the console 40. Also, the console 40 may display, on the monitor 44A, text information indicating that gauze should be placed on the bleeding region P6.
- Further, the computing unit 21 may perform control to move the laparoscope 15 when the amount of bleeding is relatively small, and perform control to place gauze when the amount of bleeding is relatively large. For example, a first threshold value and a second threshold value (where the first threshold value is less than the second threshold value) are set for the area of the bleeding region; control to move the laparoscope 15 may be performed when the area of the bleeding region P6 is equal to or greater than the first threshold value, and control to place gauze may be performed when the area becomes equal to or greater than the second threshold value.
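- A minimal sketch, not from the publication, of the two-threshold decision described above; the action labels are assumptions.

```python
# Illustrative sketch: move the laparoscope for moderate bleeding,
# place gauze for heavier bleeding (first_threshold < second_threshold).
def bleeding_action(bleeding_area: float, first_threshold: float, second_threshold: float) -> str:
    if bleeding_area >= second_threshold:
        return "place_gauze"
    if bleeding_area >= first_threshold:
        return "move_laparoscope_to_bleeding_centroid"
    return "no_action"
```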
- the console 40 may be configured to start the above control when receiving a trigger operation by the operator.
- the trigger operation is the same as in the first specific example. That is, a predetermined gesture operation by the surgical device, a predetermined input operation by the input device 42, a predetermined voice input, or the like can be used as the trigger operation.
- Embodiment 3 will describe a configuration for changing the resolution of the surgical field image according to the degree of certainty of the inference result.
- FIG. 14 is a flowchart for explaining the procedure of processing executed by the inference unit 20 according to the third embodiment.
- the surgical robot system 1 performs inference processing on the surgical field image in the same procedure as in the first embodiment.
- the calculation unit 21 of the inference unit 20 calculates the certainty factor of the inference result (step S301).
- the certainty of the inference result is calculated based on the probability output from the softmax layer SM of the learning model MD.
- the calculation unit 21 can calculate the certainty by averaging the probability values for each pixel estimated to be the object.
- the calculation unit 21 changes the resolution of the operative field image according to the calculated certainty (step S302).
- For example, a resolution may be set in advance in association with the certainty, and the resolution of the operative field image may be changed to the set resolution according to the calculated certainty.
- the computing unit 21 may change to a preset resolution when the certainty is lower than the threshold.
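- A minimal sketch, not from the publication, of the certainty calculation and the conditional resolution change; the 0.7 cut-off and the factor-of-two down-scaling are assumptions made only for illustration.

```python
# Illustrative sketch: certainty as the mean probability over pixels
# inferred as the object, and a down-scaling step applied when the
# certainty falls below a threshold.
import numpy as np

def certainty(prob_map: np.ndarray, threshold: float = 0.9) -> float:
    object_pixels = prob_map[prob_map >= threshold]
    return float(object_pixels.mean()) if object_pixels.size else 0.0

def maybe_downscale(image: np.ndarray, conf: float, conf_threshold: float = 0.7) -> np.ndarray:
    if conf >= conf_threshold:
        return image
    return image[::2, ::2]          # halve the resolution before archiving
```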
- the calculation unit 21 transmits the surgical field image and the inference image with the changed resolution to the server device 30, and registers them in the database 32 (step S303).
- As described above, in Embodiment 3 it is possible to change the resolution of an operative field image that has a low degree of certainty and for which it is difficult to determine whether or not the object is present, thereby saving storage capacity.
- In Embodiment 4, a configuration for calculating a score of surgery performed by the surgical robot based on the certainty of the inference result and the information of the surgical robot 10 will be described.
- the calculation unit 21 of the inference unit 20 calculates the certainty factor of the inference result (step S401).
- the certainty of the inference result is calculated based on the probability output from the softmax layer SM of the learning model MD.
- the calculation unit 21 can calculate the certainty by averaging the probability values for each pixel estimated to be the object.
- the calculation unit 21 acquires information on the surgical robot 10 from the console 40 (step S402).
- the calculation unit 21 may acquire information such as the positions, angles, velocities, and angular velocities of the arms 13A to 13D.
- the calculation unit 21 calculates the surgical score based on the confidence factor calculated in step S401 and the information of the surgical robot 10 acquired in step S402 (step S403).
- For example, a function or a learning model configured to output a surgical score in response to the input of the certainty and the information of the surgical robot 10 is prepared in advance, and the score is calculated by inputting the certainty and the information of the surgical robot 10 into the function or the learning model.
- Alternatively, the calculation unit 21 may calculate the score using a function or a learning model prepared in advance based on information such as the certainty for the anatomical structure (object), its area, increases or decreases in the area, operation information of the surgical device, and the recognition result of the surgical device (trajectory, etc.).
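- Purely for illustration, the "function prepared in advance" could be as simple as a weighted sum; the feature names and weights below are assumptions, not values from the publication.

```python
# Illustrative sketch: a hand-tuned weighted sum standing in for the
# scoring function. Higher certainty raises the score; excessive arm
# speed and long instrument trajectories lower it.
def surgery_score(certainty: float, mean_arm_speed: float, path_length: float) -> float:
    return 100.0 * certainty - 0.5 * mean_arm_speed - 0.1 * path_length
```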
- the calculation unit 21 may determine the next operation of the surgical robot 10 or present the next operation to the operator based on the calculated score.
- The surgical robot system 1 according to Embodiment 5 is assumed to be a system in which the laparoscope 15 generates left-eye and right-eye operative field images, and three-dimensional display is performed by outputting the generated left-eye and right-eye operative field images to the monitors 44A and 44B via the inference unit 20.
- the calculation unit 21 of the inference unit 20 performs inference processing on each of the left-eye surgical field image and the right-eye surgical field image.
- the inference procedure is the same as in the first embodiment.
- FIG. 16 is a flowchart for explaining the procedure of processing executed by the inference unit 20 according to the fifth embodiment.
- the calculation unit 21 of the inference unit 20 performs inference processing on each of the left-eye surgical field image and the right-eye surgical field image (step S501).
- the inference procedure is the same as in the first embodiment.
- the calculation unit 21 calculates the certainty factor for each inference result (step S502).
- a certainty factor calculation method is the same as in the third embodiment.
- the calculation unit 21 compares the confidence obtained from the left-eye operating field image and the confidence obtained from the right-eye operating field image, and determines whether or not the confidence is different (step S503).
- the calculation unit 21 determines that the certainty factors are different when the certainty factors differ by a predetermined ratio (for example, 10%) or more.
- If it is determined that the degrees of certainty are different (step S503: YES), there is a possibility that the laparoscope 15 is tilted with respect to the object, so the computing unit 21 outputs an alarm (step S504). Specifically, the computing unit 21 transmits character information indicating that the laparoscope 15 is tilted to the console 40 and displays it on the monitors 44A and 44B.
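- A minimal sketch, not from the publication, of the comparison in steps S502 to S504; treating the 10% difference as relative to the larger of the two certainties is an assumption.

```python
# Illustrative sketch: compare the certainties obtained from the left-eye
# and right-eye images and raise an alarm when they differ by 10% or more
# (relative to the larger of the two, which is an assumption).
def check_stereo_certainty(conf_left: float, conf_right: float, ratio: float = 0.10) -> bool:
    base = max(conf_left, conf_right)
    if base == 0.0:
        return False
    differs = abs(conf_left - conf_right) / base >= ratio
    if differs:
        print("Warning: the laparoscope may be tilted with respect to the object.")
    return differs
```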
- In the present embodiment, an alarm is output by displaying character information; alternatively, other alarm information may be sent to the console 40.
- the calculation unit 21 may calculate depth information based on the left and right parallax of the operative field image and transmit the calculated depth information to the console 40 .
- Depth information of designated positions on the object (the center of gravity, the four corners, arbitrary points on the contour, a set position group, etc.) may be calculated.
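- A minimal sketch of the standard pinhole-stereo relation that such a depth calculation could rely on; the focal length and baseline are camera parameters assumed to be known, and all identifier names are assumptions.

```python
# Illustrative sketch: depth from the left/right disparity of a matched
# point, using the relation depth = focal_length * baseline / disparity.
def depth_from_disparity(x_left_px: float, x_right_px: float,
                         focal_px: float, baseline_mm: float) -> float:
    disparity = x_left_px - x_right_px
    if disparity <= 0.0:
        return float("inf")        # no valid depth estimate for this point
    return focal_px * baseline_mm / disparity
```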
- the calculation unit 21 may generate control information for controlling the operation of the laparoscope 15 based on the calculated depth information, and transmit the generated control information to the console 40 .
- the calculation unit 21 may generate control information for automatically zooming the laparoscope 15 and transmit the generated control information to the console 40 .
- Further, the computing unit 21 may determine the arrival point and the arrival path of the surgical device based on the depth information, automatically move the arm units 13A to 13D so that the surgical device approaches the vicinity of the resection target, and, when the surgical device has approached the resection target, perform control to display information such as "Please excise" on the monitors 44A and 44B.
- an alert may be output when an attempt is made to excise a portion that should not be excised or when a sign of danger such as bleeding is detected.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Robotics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Pathology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Endoscopes (AREA)
Abstract
Description
(Embodiment 1)
FIG. 1 is a block diagram showing a configuration example of a surgical robot system 1 according to Embodiment 1. The surgical robot system 1 according to Embodiment 1 includes a surgical robot 10, an inference unit 20, a server device 30, and a console 40. In the surgical robot system 1, the operative field is imaged by the laparoscope 15 mounted on the surgical robot 10, and the operative field images obtained by the laparoscope 15 are displayed on the monitors 44A and 44B of the console 40. The operator (doctor) performs laparoscopic surgery by operating the surgical devices mounted on the surgical robot 10 through movement of the arm operating device 43 while checking the operative field images displayed on the monitors 44A and 44B.
FIG. 2 is a schematic diagram showing an example of an operative field image. The operative field image in the present embodiment is an image obtained by imaging the inside of the patient's abdominal cavity with the laparoscope 15. The operative field image need not be a raw image output by the laparoscope 15, and may be an image (frame image) processed by the signal processing unit 16 or the like.
FIG. 6 is a flowchart for explaining the procedure of processing executed in the surgical robot system 1 according to Embodiment 1. The console 40 accepts transmission settings for the operative field image and the inference image through the input device 42 (step S101). The transmission setting accepts a setting as to whether to transmit only the operative field image, only the inference image, or both the operative field image and the inference image. The console 40 notifies the inference unit 20 of the received transmission setting (step S102).
In Embodiment 2, a configuration will be described in which the inference unit 20 generates control information for the surgical robot 10 and the surgical robot is controlled through the console 40.
In Embodiment 3, a configuration for changing the resolution of the operative field image according to the certainty of the inference result will be described.
In Embodiment 4, a configuration for calculating a score of surgery performed by the surgical robot based on the certainty of the inference result and the information of the surgical robot 10 will be described.
The surgical robot system 1 according to Embodiment 5 is assumed to be a system in which the laparoscope 15 generates left-eye and right-eye operative field images, and three-dimensional display is performed by outputting the generated left-eye and right-eye operative field images to the monitors 44A and 44B via the inference unit 20.
11 control unit
12A-12D drive units
13A-13D arm units
14 light source device
15 laparoscope
16 signal processing unit
20 inference unit
21 calculation unit
22 storage unit
23 first connection unit
24 second connection unit
25 third connection unit
30 server device
31 codec unit
32 database
40 console
41 master controller
42 input device
43 arm operating device
44A, 44B monitors
Claims (20)
- 1. An inference device connected between a surgical robot and a console that controls the surgical robot, the inference device comprising: an image acquisition unit that acquires an operative field image captured by an imaging unit of the surgical robot; an inference unit that performs inference processing on the acquired operative field image; and a transmission unit that transmits, to the console, at least one of the operative field image acquired by the image acquisition unit and information based on an inference result by the inference unit, according to a transmission setting made by the console.
- 2. The inference device according to claim 1, wherein the inference unit infers at least one of a position of an object to be recognized in the operative field image and an event occurring in the operative field image.
- 3. The inference device according to claim 2, wherein the inference unit generates image data indicating the position of the object, and the transmission unit transmits the image data to the console by one-way communication.
- 4. The inference device according to claim 3, wherein the inference unit generates position information indicating the position of the object, and the transmission unit transmits the position information to the console by two-way communication.
- 5. The inference device according to any one of claims 2 to 4, further comprising a control unit that generates control information for controlling an operation of the surgical robot according to the inference result by the inference unit, wherein the transmission unit transmits the control information generated by the control unit to the console.
- 6. The inference device according to claim 5, wherein the object is a surgical device, and the control unit generates the control information so as to move the imaging unit following a distal end portion of the recognized surgical device.
- 7. The inference device according to claim 5, wherein the control unit calculates an area of the recognized object and generates the control information so as to move the imaging unit following a portion where the calculated area increases or decreases.
- 8. The inference device according to claim 5, wherein the control unit generates the control information so as to move the imaging unit to a designated position on the object.
- 9. The inference device according to claim 5, wherein the control unit generates the control information so as to move the imaging unit according to a distance between the imaging unit and the object.
- 10. The inference device according to claim 5, wherein the inference unit infers a motion or gesture of an operator operating the console, and the control unit generates control information so as to control an operation of the imaging unit or a surgical device according to the inference result by the inference unit.
- 11. The inference device according to claim 5, wherein the control unit obtains an area or shape of the recognized object and generates control information for selecting or controlling a surgical device to be used according to the area or shape of the object.
- 12. The inference device according to any one of claims 1 to 4, further comprising: a calculation unit that calculates a certainty of the inference result by the inference unit; and a resolution changing unit that changes a resolution of the operative field image according to the certainty calculated by the calculation unit.
- 13. The inference device according to any one of claims 1 to 4, further comprising: a calculation unit that calculates a certainty of the inference result by the inference unit; and an information acquisition unit that acquires information on the surgical robot from the console, wherein the calculation unit calculates a score of surgery performed by the surgical robot based on the calculated certainty and the information acquired by the information acquisition unit.
- 14. The inference device according to any one of claims 1 to 4, wherein the imaging unit of the surgical robot is configured to output a left-eye operative field image and a right-eye operative field image, the image acquisition unit acquires the left-eye operative field image and the right-eye operative field image output from the imaging unit, and the inference unit performs inference individually on each of the acquired left-eye operative field image and right-eye operative field image.
- 15. The inference device according to claim 14, further comprising: a calculation unit that calculates a certainty of the inference result by the inference unit; and an output unit that outputs an alarm based on a difference between the certainty of the inference result calculated for the left-eye operative field image and the certainty of the inference result calculated for the right-eye operative field image.
- 16. The inference device according to claim 14, further comprising: a calculation unit that calculates a certainty of the inference result by the inference unit; and a control unit that generates control information for moving the imaging unit according to a difference between the certainty of the inference result calculated for the left-eye operative field image and the certainty of the inference result calculated for the right-eye operative field image, wherein the transmission unit transmits the control information generated by the control unit to the console.
- 17. The inference device according to claim 14, further comprising a calculation unit that calculates depth information based on the left-eye operative field image and the right-eye operative field image, wherein the transmission unit transmits the calculated depth information to the console.
- 18. The inference device according to claim 14, further comprising: a calculation unit that calculates depth information based on the left-eye operative field image and the right-eye operative field image; and a control unit that generates control information for controlling an operation of the surgical robot according to the calculated depth information, wherein the transmission unit transmits the control information generated by the control unit to the console.
- 19. An information processing method in which a computer connected between a surgical robot and a console that controls the surgical robot executes processing of: acquiring an operative field image captured by an imaging unit of the surgical robot; performing inference on the acquired operative field image; and transmitting at least one of the operative field image and information based on an inference result to the console according to a transmission setting received through the console.
- 20. A computer program for causing a computer connected between a surgical robot and a console that controls the surgical robot to execute processing of: acquiring an operative field image captured by an imaging unit of the surgical robot; performing inference on the acquired operative field image; and transmitting at least one of the operative field image and information based on an inference result to the console according to a transmission setting received through the console.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202280060465.9A CN117915853A (zh) | 2021-09-10 | 2022-09-09 | Inference device, information processing method, and computer program |
| JP2023547013A JP7461689B2 (ja) | 2021-09-10 | 2022-09-09 | Inference device, information processing method, and computer program |
| US18/689,636 US20240390089A1 (en) | 2021-09-10 | 2022-09-09 | Inference Device, Information Processing Method, and Recording Medium |
| EP22867448.7A EP4413941A1 (en) | 2021-09-10 | 2022-09-09 | Inference device, information processing method, and computer program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163242628P | 2021-09-10 | 2021-09-10 | |
| US63/242,628 | 2021-09-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023038127A1 true WO2023038127A1 (ja) | 2023-03-16 |
Family
ID=85506463
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2022/033958 Ceased WO2023038127A1 (ja) | 2021-09-10 | 2022-09-09 | 推論装置、情報処理方法、及びコンピュータプログラム |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20240390089A1 (ja) |
| EP (1) | EP4413941A1 (ja) |
| JP (1) | JP7461689B2 (ja) |
| CN (1) | CN117915853A (ja) |
| WO (1) | WO2023038127A1 (ja) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI805248B (zh) * | 2022-03-01 | 2023-06-11 | 艾沙技術股份有限公司 | 基於頭部追蹤控制內視鏡手術機器人的控制系統與控制方法 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2014038075A (ja) | 2012-08-20 | 2014-02-27 | Tokyo Institute Of Technology | 外力推定装置及び鉗子システム |
| WO2014155815A1 (ja) * | 2013-03-29 | 2014-10-02 | オリンパスメディカルシステムズ株式会社 | 立体内視鏡システム |
| WO2019239244A1 (en) * | 2018-06-14 | 2019-12-19 | Sony Corporation | Dominant tool detection system for surgical videos |
| JP2020062372A (ja) * | 2018-05-09 | 2020-04-23 | オリンパス ビンテル ウント イーベーエー ゲーエムベーハーOlympus Winter & Ibe Gesellschaft Mit Beschrankter Haftung | 医療システムの作動方法及び外科手術を行うための医療システム |
| JP2021509305A (ja) * | 2017-12-28 | 2021-03-25 | エシコン エルエルシーEthicon LLC | ロボット支援外科用プラットフォームのためのディスプレイ装置 |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160331584A1 (en) | 2015-05-14 | 2016-11-17 | Novartis Ag | Surgical tool tracking to control surgical system |
| US20190361592A1 (en) | 2018-05-23 | 2019-11-28 | Alcon Inc. | System and method of utilizing surgical tooling equipment with graphical user interfaces |
-
2022
- 2022-09-09 EP EP22867448.7A patent/EP4413941A1/en active Pending
- 2022-09-09 US US18/689,636 patent/US20240390089A1/en active Pending
- 2022-09-09 WO PCT/JP2022/033958 patent/WO2023038127A1/ja not_active Ceased
- 2022-09-09 CN CN202280060465.9A patent/CN117915853A/zh active Pending
- 2022-09-09 JP JP2023547013A patent/JP7461689B2/ja active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2014038075A (ja) | 2012-08-20 | 2014-02-27 | Tokyo Institute Of Technology | 外力推定装置及び鉗子システム |
| WO2014155815A1 (ja) * | 2013-03-29 | 2014-10-02 | オリンパスメディカルシステムズ株式会社 | 立体内視鏡システム |
| JP2021509305A (ja) * | 2017-12-28 | 2021-03-25 | エシコン エルエルシーEthicon LLC | ロボット支援外科用プラットフォームのためのディスプレイ装置 |
| JP2020062372A (ja) * | 2018-05-09 | 2020-04-23 | オリンパス ビンテル ウント イーベーエー ゲーエムベーハーOlympus Winter & Ibe Gesellschaft Mit Beschrankter Haftung | 医療システムの作動方法及び外科手術を行うための医療システム |
| WO2019239244A1 (en) * | 2018-06-14 | 2019-12-19 | Sony Corporation | Dominant tool detection system for surgical videos |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4413941A1 (en) | 2024-08-14 |
| US20240390089A1 (en) | 2024-11-28 |
| CN117915853A (zh) | 2024-04-19 |
| JPWO2023038127A1 (ja) | 2023-03-16 |
| JP7461689B2 (ja) | 2024-04-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20250176797A1 (en) | Augmented medical vision systems and methods | |
| US20250090241A1 (en) | Systems and methods for tracking a position of a robotically-manipulated surgical instrument | |
| US20250054147A1 (en) | Composite medical imaging systems and methods | |
| JP7480477B2 (ja) | 医療用観察システム、制御装置及び制御方法 | |
| JP2022036255A (ja) | ロボット外科手術装置および視聴者適合型の立体視ディスプレイの態様を制御するためのシステム、方法、およびコンピュータ可読記憶媒体 | |
| CN102821671B (zh) | 内窥镜观察支持系统和设备 | |
| KR101038417B1 (ko) | 수술 로봇 시스템 및 그 제어 방법 | |
| JP2020156800A (ja) | 医療用アームシステム、制御装置、及び制御方法 | |
| JP2022514635A (ja) | デュアル画像センサを有する内視鏡 | |
| US20250117073A1 (en) | Systems and methods for facilitating optimization of an imaging device viewpoint during an operating session of a computer-assisted operation system | |
| KR20080089376A (ko) | 3차원 텔레스트레이션을 제공하는 의료용 로봇 시스템 | |
| JPWO2017145475A1 (ja) | 医療用情報処理装置、情報処理方法、医療用情報処理システム | |
| WO2022158451A1 (ja) | コンピュータプログラム、学習モデルの生成方法、及び支援装置 | |
| US12471994B2 (en) | Systems and methods for facilitating insertion of a surgical instrument into a surgical space | |
| JP2023509321A (ja) | 内視鏡処置のためのガイド下解剖学的構造操作 | |
| JP2023507063A (ja) | 手術中に画像取込装置を制御するための方法、装置、およびシステム | |
| US20250090231A1 (en) | Anatomical structure visualization systems and methods | |
| JP2022045236A (ja) | 医療用撮像装置、学習モデル生成方法および学習モデル生成プログラム | |
| WO2023237105A1 (zh) | 用于在医生控制台显示虚拟手术器械的方法及医生控制台 | |
| JP7311936B1 (ja) | コンピュータプログラム、学習モデルの生成方法、及び情報処理装置 | |
| JP7461689B2 (ja) | 推論装置、情報処理方法、及びコンピュータプログラム | |
| WO2025019594A1 (en) | Systems and methods for implementing a zoom feature associated with an imaging device in an imaging space | |
| JP2025174992A (ja) | ロボット操作手術器具の位置を追跡システムおよび方法 | |
| WO2024182294A1 (en) | Systems and methods for calibrating an image sensor in relation to a robotic instrument |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22867448 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023547013 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18689636 Country of ref document: US Ref document number: 202280060465.9 Country of ref document: CN |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2022867448 Country of ref document: EP Effective date: 20240410 |