EP4577121A1 - Surgical robotic system and method for intraoperative fusion of different imaging modalities - Google Patents
Surgical robotic system and method for intraoperative fusion of different imaging modalities
- Publication number
- EP4577121A1 (application EP23764720.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- ultrasound
- laparoscopic
- tissue
- probe
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0833—Clinical applications involving detecting or locating foreign bodies or organic structures
- A61B8/085—Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
- A61B8/4263—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors not mounted on the probe, e.g. mounted on an external reference frame
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/48—Diagnostic techniques
- A61B8/483—Diagnostic techniques involving the acquisition of a 3D volume of data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5238—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- A61B8/5246—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5238—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- A61B8/5246—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
- A61B8/5253—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode combining overlapping images, e.g. spatial compounding
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5269—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
- A61B8/5276—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts due to motion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2059—Mechanical position encoders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
- A61B2090/3782—Surgical systems with images on a monitor during operation using ultrasound transmitter or receiver in catheter or minimal invasive instrument
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
Definitions
- Surgical robotic systems are currently being used in a variety of surgical procedures, including minimally invasive medical procedures.
- Some surgical robotic systems include a surgeon console controlling a surgical robotic arm and a surgical instrument having an end effector (e.g., forceps or grasping instrument) coupled to and actuated by the robotic arm.
- In operation, the robotic arm is moved to a position over a patient and then guides the surgical instrument into a small incision via a surgical port or a natural orifice of the patient to position the end effector at a work site within the patient's body.
- the ultrasound images may be obtained using an ultrasound probe that is localized without using physical fiducial markers, i.e., using a vision-only approach.
- the vision-based approach may utilize a deep learning model to estimate the degrees-of-freedom (DoF) pose of a rigid object (i.e., the ultrasound probe) from stereo or monocular laparoscopic camera images.
- the probe may have any number of DoF; for example, 6 DoF.
- Realistic training data for probe localization for the deep learning model may be provided by a custom synthetic data generation pipeline.
- a synthetic 3D anatomically accurate surgical site may be developed based on real data from surgical procedures.
- the ultrasound probe may be rendered on the surgical site using the 3D virtual (e.g., computer aided drafting) model of the probe and stereo laparoscopic camera geometry from camera calibration.
- the data may include a plurality of synthetic images, e.g., around 100,000, to develop the deep learning network to estimate the 6 DoF pose of the ultrasound probe directly from images without modifying the probe in any way, i.e., with no physical fiducial markers on the probe.
- This deep learning model can be trained on each image of a pair of stereoscopic images separately, rather than in pairs.
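One plausible realization of such a per-image, marker-free pose estimator (not taken from the application; the key-point layout, calibration inputs, and use of OpenCV's PnP solver are assumptions made for illustration) is to let the learned detector output 2D probe key points and recover the 6 DoF pose from the known probe geometry:

```python
import numpy as np
import cv2

# Illustrative 3D key points on the probe CAD model, in the probe frame (metres).
# The actual key-point layout is not specified in the application.
PROBE_KEYPOINTS_3D = np.array([
    [0.000, 0.000, 0.000],   # transducer tip
    [0.010, 0.000, 0.000],   # transducer edge
    [0.000, 0.008, 0.000],   # housing corner
    [0.030, 0.000, 0.005],   # shaft reference point
    [0.030, 0.008, 0.005],   # shaft reference point, opposite side
    [0.060, 0.004, 0.002],   # coupling feature
], dtype=np.float64)

def estimate_probe_pose(keypoints_2d, camera_matrix, dist_coeffs=None):
    """Recover a 6 DoF probe pose from 2D key points detected in a single image.

    keypoints_2d  : (N, 2) pixel coordinates predicted by the learned detector,
                    ordered to match PROBE_KEYPOINTS_3D.
    camera_matrix : 3x3 intrinsic matrix from laparoscope calibration.
    Returns (R, t): rotation matrix and translation of the probe in the camera
    frame of reference.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        PROBE_KEYPOINTS_3D, np.asarray(keypoints_2d, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP pose estimation failed for this frame")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```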
- the laparoscopic or robotic ultrasound probe may be localized in the field of view of the laparoscopic monocular or stereo camera through a vision-only approach without modifying the laparoscopic or robotic ultrasound probe to include any physical fiducial markers.
- the image processing device may be further configured to localize the laparoscopic ultrasound probe in the video stream based on the key points or virtual fiducial markers.
- the image processing device may be further configured to estimate multiple DoF (e.g., 6) pose and orientation of the laparoscopic or robotic ultrasound probe from the video stream based on the key points or fiducial markers.
- the image processing device may be further configured to estimate the articulated pose and orientation of the grasper instrument holding the ultrasound probe.
- the image processing device may be configured to estimate the pose and orientation of the ultrasound probe by combining the pose and orientation of the probe as well as the pose and orientation of the grasper holding the probe.
- the image processing device may be further configured to generate a dense depth map of the surgical site from the laparoscopic monocular or stereo camera to estimate the 3D location of instruments, the probe, and anatomy in the laparoscopic camera frame of reference.
- the image processing device may also be configured to implement a Deformable Visual Simultaneous Localization and Mapping (DV-SLAM) pipeline to localize the laparoscopic camera in 3D space at every acquired image frame over time with respect to a World Coordinate System (WCS).
- the WCS may be tied to the trocar from which the laparoscopic endoscope camera is inserted, in which case the location of the trocar on the patient anatomy is estimated from multiple external cameras mounted on robot carts or towers.
- the WCS may also be tied to one of the instruments, e.g., the grasper instrument manipulating the laparoscopic ultrasound probe, in which case the location of the trocar on the patient anatomy is estimated by segmenting the shaft of the grasper instrument in a plurality of images captured from the laparoscopic camera and computing the intersection between lines fit to the shafts, hence localizing the remote center of motion (RCM) of the grasper instrument trocar.
- the image processing system may be configured to dispose the location and orientation of ultrasound probe in each laparoscopic camera image with respect to the WCS, hence transferring all images with respect to a fixed frame of reference.
- the image processing device may be also configured to generate the ultrasound volume by computing the value of each ultrasound voxel by interpolating between the values of ultrasound image slice pixels that overlap the corresponding voxels after placing each ultrasound image in the world coordinate system.
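A minimal sketch of this compounding step, assuming each slice already has a 4x4 slice-to-world transform derived from the estimated probe pose; nearest-voxel accumulation with averaging is used here as a simple stand-in for the interpolation described above, and all parameter names are illustrative:

```python
import numpy as np

def compound_slices(slices, slice_to_world, pixel_spacing, voxel_size,
                    vol_origin, vol_shape):
    """Forward-map tracked 2D ultrasound slices into a world-aligned voxel grid.

    slices         : list of (H, W) intensity arrays
    slice_to_world : list of 4x4 transforms placing each slice in the WCS
    pixel_spacing  : (dy, dx) physical size of a slice pixel
    voxel_size     : edge length of a cubic voxel
    vol_origin     : world coordinates of voxel (0, 0, 0)
    vol_shape      : (nz, ny, nx) shape of the output volume
    """
    vol_origin = np.asarray(vol_origin, dtype=np.float64)
    accum = np.zeros(vol_shape, dtype=np.float64)
    count = np.zeros(vol_shape, dtype=np.int64)
    for img, T in zip(slices, slice_to_world):
        h, w = img.shape
        v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        # Pixel centres expressed in the slice frame (the slice lies in z = 0).
        pts = np.stack([u.ravel() * pixel_spacing[1],
                        v.ravel() * pixel_spacing[0],
                        np.zeros(h * w), np.ones(h * w)])
        world = (T @ pts)[:3].T                      # (H*W, 3) world positions
        idx = np.round((world - vol_origin) / voxel_size).astype(int)
        # Keep pixels landing inside the volume, accumulate, then average.
        inside = np.all((idx >= 0) & (idx < np.array(vol_shape)[::-1]), axis=1)
        ix, iy, iz = idx[inside].T
        np.add.at(accum, (iz, iy, ix), img.ravel()[inside])
        np.add.at(count, (iz, iy, ix), 1)
    return np.divide(accum, count, out=np.zeros_like(accum), where=count > 0)
```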
- Depth mapping can be performed for either monocular or stereo cameras. Stereo camera depth estimation is easier and more reliable than monocular camera depth estimation.
- DV-SLAM can be performed using either monocular camera input or stereo camera input. DV-SLAM with monocular camera input cannot resolve the scale factor (how far the camera is from the scene) reliably because multiple solutions exist along the same ray to a 3D point.
- Deformable Visual SLAM in combination with depth map from stereo reconstruction provides the most reliable method of localizing the camera with respect to WCS.
- the default mode of operation may include: 1) depth estimation through stereo reconstruction using calibrated stereo endoscope or through monocular depth estimation using monocular laparoscope, i.e., depth mapping; and 2) DV-SLAM at every frame with stereo pair from calibrated stereo camera pair images.
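A compact sketch of the stereo-reconstruction part of this default mode, assuming rectified 8-bit left/right endoscope images and known calibration; the semi-global block matcher and its parameters are illustrative choices, not details given in the application:

```python
import numpy as np
import cv2

def stereo_depth(left_gray, right_gray, focal_px, baseline_m):
    """Dense depth map from a calibrated, rectified stereo endoscope pair.

    left_gray, right_gray : rectified 8-bit grayscale images
    focal_px              : focal length in pixels (from calibration)
    baseline_m            : stereo baseline in metres (from calibration)
    Returns depth in metres; pixels with invalid disparity are set to NaN.
    """
    matcher = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=128,          # must be a multiple of 16
        blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5,
        uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.nan, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
    return depth
```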
- the image processing device may also be configured to enhance the ultrasound volume by matching a plurality of key points in each 2D ultrasound image of the plurality of ultrasound images.
- the image processing device may be additionally configured to generate a 3D model based on the volumetric image of tissue formed from the first modality images and deform the 3D model to conform to the ultrasound volume.
- the 3D model may be deformed as follows: segmenting of the 2D/3D anatomical target surface and tracking of the instrument to isolate motion of instrument from anatomy; compensation for motion of the patient, e.g., breathing motion compensation, by tracking target anatomy and estimating movement of anatomy while masking out the movement of instruments; and biomechanical modeling to estimate the physically-realistic movement of the organ of interest along with the anatomy around the tissue being tracked.
- the image processing device may be configured to segment all instruments at the surgical site in order to mask out non-anatomical regions of interest from organ surface deformation estimation.
- the image processing device may further be configured to compute an instance segmentation mask of the organ in the laparoscopic camera images for every frame to estimate breathing motion as well as surface deformation.
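As a loose, simplified stand-in for the deformation steps described above (the application describes biomechanical modeling; the thin-plate-spline warp below is only an illustrative alternative, with landmark correspondences assumed to be given):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_model(model_vertices, src_landmarks, dst_landmarks):
    """Warp the preoperative 3D model so its landmarks match intraoperative ones.

    model_vertices : (V, 3) vertices of the preoperative mesh
    src_landmarks  : (K, 3) landmark positions on the preoperative model
    dst_landmarks  : (K, 3) matching landmarks observed intraoperatively,
                     e.g., tracked organ surface points or ultrasound structures
    Requires at least four non-coplanar landmark pairs.
    Returns the deformed vertex positions.
    """
    # Thin-plate-spline displacement field driven by landmark correspondences.
    displacement = RBFInterpolator(src_landmarks,
                                   dst_landmarks - src_landmarks,
                                   kernel="thin_plate_spline", smoothing=1e-6)
    return model_vertices + displacement(model_vertices)
```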
- Implementations of the above embodiment may further include one or more of the following features.
- the laparoscopic or robotic ultrasound probe may include a plurality of physical fiducial markers on the probe in order to robustly estimate its pose and orientation from laparoscopic camera images. This may be used for probes lacking any discernible visual features on the outside for pose and orientation estimation.
- the laparoscopic or robotic ultrasound probe may be devoid of any physical fiducial markers, obviating the need for any modification of the ultrasound probe.
- the image processing device may be further configured to localize the laparoscopic ultrasound probe in the video stream based on the plurality of key points or virtual fiducial markers.
- the image processing device may be also configured to localize the laparoscopic ultrasound probe based on kinematic data of the first robotic arm.
- the image processing device may be additionally configured to estimate a pose and orientation of the laparoscopic ultrasound probe from the video stream based on the key points or virtual fiducial markers.
- the pose and orientation estimation of the laparoscopic ultrasound probe may be accomplished through a combination of kinematics data of the first robotic arm as well as the localization of the plurality of key points or virtual fiducial markers from the laparoscopic camera video stream.
- the image processing device may be further configured to generate the ultrasound volume by computing the value of each ultrasound voxel by interpolating between the values of ultrasound image slice pixels that overlap the corresponding voxels after placing each ultrasound image in the world coordinate system.
- the image processing device may also be configured to enhance the ultrasound volume by matching a plurality of key points in each 2D ultrasound image of the plurality of ultrasound images.
- the image processing device may be also further configured to transfer a slice of the 3D model to a corresponding 2D ultrasound image of the ultrasound volume using a neural network.
- a method for intraoperative imaging of tissue includes generating a 3D model of tissue from a plurality of preoperative images, generating and updating a depth-based surface map of tissue using monocular or stereo laparoscopic camera, and generating an ultrasound volume from a plurality of 2D ultrasound images obtained from a laparoscopic ultrasonic probe.
- the method further includes a visual localization and mapping pipeline that places the laparoscopic camera in a world coordinate system from every image of the camera.
- the method further includes ultrasound probe pose and orientation estimation from the monocular or stereo laparoscopic camera in the world coordinate system to generate an ultrasound volume from a plurality of registered ultrasound image slices.
- the method further includes registering the ultrasound volume with the 3D model and generating an overlay of the 3D model and a 2D ultrasound image of the plurality of 2D ultrasound images.
- the method additionally includes displaying a video stream obtained from a laparoscopic video camera and the overlay.
- the video stream includes the laparoscopic ultrasound probe, and the overlay includes the 3D model and the 2D ultrasound image extending along an imaging plane of the laparoscopic ultrasound probe.
- Implementations of the above embodiment may additionally include one or more of the following features.
- the method may further include localizing the laparoscopic ultrasound probe in the video stream based on a plurality of key points or virtual fiducial markers disposed on the laparoscopic ultrasound probe.
- the method may further include moving the laparoscopic ultrasound probe by a robotic arm and localizing the laparoscopic ultrasound probe based on kinematic data of the robotic arm.
- the method may additionally include estimating a pose and orientation of the laparoscopic ultrasound probe from the video stream based on the combination of key points or virtual fiducial markers, robotic arm kinematic data, stereo reconstruction, and visual simultaneous localization and mapping of laparoscopic camera with respect to a world coordinate system.
- the method may further include generating the ultrasound volume by computing the value of each ultrasound voxel by interpolating between the values of ultrasound image slice pixels that overlap the corresponding voxels after placing each ultrasound image in the world coordinate system.
- the image processing device may also be configured to enhance the ultrasound volume by matching a plurality of key points in each 2D ultrasound image of the plurality of ultrasound images.
- the method may further include transferring a slice of the 3D model to a corresponding 2D ultrasound image of the ultrasound volume using a neural network.
- FIG. 1 is a schematic illustration of a surgical robotic system including a control tower, a console, and one or more surgical robotic arms each disposed on a movable cart according to an embodiment of the present disclosure
- FIG. 3 is a perspective view of a movable cart having a setup arm with the surgical robotic arm of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure
- FIG. 4 is a schematic diagram of a computer architecture of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure
- FIG. 5 is a plan schematic view of movable carts of FIG. 1 positioned about a surgical table according to an embodiment of the present disclosure
- FIG. 6 is a method for intraoperative fusion of different imaging modalities according to an embodiment of the present disclosure
- FIG. 7 is a method of obtaining and registering multiple imaging modalities according to an embodiment of the present disclosure
- FIG. 8A is a computed tomography image according to an embodiment of the present disclosure
- FIG. 8B is a 3D model image according to an embodiment of the present disclosure.
- FIG. 8C is an endoscopic video image according to an embodiment of the present disclosure.
- FIG. 8D is an ultrasound image according to an embodiment of the present disclosure.
- FIG. 9 is a perspective view of a laparoscopic ultrasound probe according to an embodiment of the present disclosure.
- FIG. 10 is a schematic diagram showing multiple coordinate systems and the transformation matrices to convert between them according to an embodiment of the present disclosure
- FIG. 11 shows ultrasound segmentation images according to an embodiment of the present disclosure
- FIG. 14 is a schematic diagram illustrating generation of an ultrasound volume from a plurality of 2D ultrasound slices according to an embodiment of the present disclosure
- FIG. 15 is a schematic flow chart illustrating registration between an intra-operative ultrasound volume and a pre-operative CT volume according to an embodiment of the present disclosure.
- the setup arm 61 includes a first link 62a, a second link 62b, and a third link 62c, which provide for lateral maneuverability of the robotic arm 40.
- the links 62a, 62b, 62c are interconnected at joints 63a and 63b, each of which may include an actuator (not shown) for rotating the links 62a and 62b relative to each other and the link 62c.
- the links 62a, 62b, 62c are movable in their corresponding lateral planes that are parallel to each other, thereby allowing for extension of the robotic arm 40 relative to the patient (e.g., surgical table).
- the robotic arm 40 may be coupled to the surgical table (not shown).
- the setup arm 61 includes controls 65 for adjusting movement of the links 62a, 62b, 62c as well as the lift 67.
- the setup arm 61 may include any type and/or number of joints.
- the third link 62c may include a rotatable base 64 having two degrees of freedom.
- the rotatable base 64 includes a first actuator 64a and a second actuator 64b.
- the first actuator 64a is rotatable about a first stationary arm axis which is perpendicular to a plane defined by the third link 62c and the second actuator 64b is rotatable about a second stationary arm axis which is transverse to the first stationary arm axis.
- the first and second actuators 64a and 64b allow for full three-dimensional orientation of the robotic arm 40.
- the actuator 48b of the joint 44b is coupled to the joint 44c via the belt 45a, and the joint 44c is in turn coupled to the joint 46b via the belt 45b.
- Joint 44c may include a transfer case coupling the belts 45a and 45b, such that the actuator 48b is configured to rotate each of the links 42b, 42c and a holder 46 relative to each other. More specifically, links 42b, 42c, and the holder 46 are passively coupled to the actuator 48b which enforces rotation about a pivot point “P” which lies at an intersection of the first axis defined by the link 42a and the second axis defined by the holder 46. In other words, the pivot point “P” is a remote center of motion (RCM) for the robotic arm 40.
- the joints 44a and 44b include actuators 48a and 48b, respectively, configured to drive the joints 44a, 44b, 44c relative to each other through a series of belts 45a and 45b or other mechanical linkages such as a drive rod, a cable, or a lever and the like.
- the actuator 48a is configured to rotate the robotic arm 40 about a longitudinal axis defined by the link 42a.
- the holder 46 defines a second longitudinal axis and is configured to receive an instrument drive unit (IDU) 52 (FIG. 1).
- the IDU 52 is configured to couple to an actuation mechanism of the surgical instrument 50 and the camera 51 and is configured to move (e.g., rotate) and actuate the instrument 50 and/or the camera 51.
- IDU 52 transfers actuation forces from its actuators to the surgical instrument 50 to actuate components of an end effector 49 of the surgical instrument 50.
- the holder 46 includes a sliding mechanism 46a, which is configured to move the IDU 52 along the second longitudinal axis defined by the holder 46.
- the holder 46 also includes a joint 46b, which rotates the holder 46 relative to the link 42c.
- the robotic arm 40 also includes a plurality of manual override buttons 53 (FIG. 1) disposed on the IDU 52 and the setup arm 61, which may be used in a manual mode. The user may press one or more of the buttons 53 to move the component associated with the button 53.
- each of the computers 21, 31, 41 of the surgical robotic system 10 may include a plurality of controllers, which may be embodied in hardware and/or software.
- the computer 21 of the control tower 20 includes a controller 21a and safety observer 21b.
- the controller 21a receives data from the computer 31 of the surgeon console 30 about the current position and/or orientation of the hand controllers 38a and 38b and the state of the foot pedals 36 and other buttons.
- the controller 21a processes these input positions to determine desired drive commands for each joint of the robotic arm 40 and/or the IDU 52 and communicates these to the computer 41 of the robotic arm 40.
- the controller 21a also receives the actual joint angles measured by encoders of the actuators 48a and 48b and uses this information to determine force feedback commands that are transmitted back to the computer 31 of the surgeon console 30 to provide haptic feedback through the hand controllers 38a and 38b.
- the safety observer 21b performs validity checks on the data going into and out of the controller 21a and notifies a system fault handler if errors in the data transmission are detected to place the computer 21 and/or the surgical robotic system 10 into a safe state.
- Each of joints 63a and 63b and the rotatable base 64 of the setup arm 61 are passive joints (i.e., no actuators are present therein) allowing for manual adjustment thereof by a user.
- the joints 63a and 63b and the rotatable base 64 include brakes that are disengaged by the user to configure the setup arm 61.
- the setup arm controller 41b monitors slippage of each of the joints 63a and 63b and the rotatable base 64 of the setup arm 61 when the brakes are engaged; these joints can be freely moved by the operator when the brakes are disengaged, but do not impact controls of other joints.
- the robotic arm controller 41c controls each joint 44a and 44b of the robotic arm 40 and calculates desired motor torques required for gravity compensation, friction compensation, and closed loop position control of the robotic arm 40.
- the robotic arm controller 41c calculates a movement command based on the calculated torque.
- the calculated motor commands are then communicated to one or more of the actuators 48a and 48b in the robotic arm 40.
- the actual joint positions are then transmitted by the actuators 48a and 48b back to the robotic arm controller 41c.
- the IDU controller 41d receives desired joint angles for the surgical instrument 50, such as wrist and jaw angles, and computes desired currents for the motors in the IDU 52.
- the IDU controller 41d calculates actual angles based on the motor positions and transmits the actual angles back to the main cart controller 41a.
- the robotic arm 40 is controlled in response to a pose of the hand controller controlling the robotic arm 40, e.g., the hand controller 38a, which is transformed into a desired pose of the robotic arm 40 through a hand eye transform function executed by the controller 21a.
- the hand eye function as well as other functions described herein, is/are embodied in software executable by the controller 21a or any other suitable controller described herein.
- the pose of one of the hand controllers 38a may be embodied as a coordinate position and roll-pitch-yaw (RPY) orientation relative to a coordinate reference frame, which is fixed to the surgeon console 30.
- the desired pose of the instrument 50 is relative to a fixed frame on the robotic arm 40.
- the pose of the hand controller 38a is then scaled by a scaling function executed by the controller 21a.
- the coordinate position may be scaled down and the orientation may be scaled up by the scaling function.
- the controller 21a may also execute a clutching function, which disengages the hand controller 38a from the robotic arm 40.
- the controller 21a stops transmitting movement commands from the hand controller 38a to the robotic arm 40 if certain movement limits or other thresholds are exceeded and in essence acts like a virtual clutch mechanism, e.g., limits mechanical input from effecting mechanical output.
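A toy sketch of the scaling and virtual-clutch behaviour described above; the scale factors and motion limit are illustrative tuning values only, not values from the application:

```python
import numpy as np

POSITION_SCALE = 0.4      # coordinate position scaled down (finer instrument motion)
ORIENTATION_SCALE = 1.2   # roll-pitch-yaw orientation scaled up
MAX_STEP = 0.02           # per-update translation limit acting as a virtual clutch

def scale_and_clutch(position, rpy, prev_position):
    """Scale a hand-controller pose and drop oversized steps (virtual clutch).

    position, rpy : hand-controller coordinate position and roll-pitch-yaw,
                    relative to the surgeon console reference frame
    prev_position : last commanded (scaled) position
    Returns the (position, rpy) to forward to inverse kinematics, or None if the
    motion exceeds the clutch threshold and should not be transmitted.
    """
    position = POSITION_SCALE * np.asarray(position, dtype=float)
    rpy = ORIENTATION_SCALE * np.asarray(rpy, dtype=float)
    if np.linalg.norm(position - np.asarray(prev_position, dtype=float)) > MAX_STEP:
        return None           # behave like a clutch: do not forward the command
    return position, rpy
```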
- the desired pose of the robotic arm 40 is based on the pose of the hand controller 38a and is then passed to an inverse kinematics function executed by the controller 21a.
- the inverse kinematics function calculates angles for the joints 44a, 44b, 44c of the robotic arm 40 that achieve the scaled and adjusted pose input by the hand controller 38a.
- the calculated angles are then passed to the robotic arm controller 41c, which includes a joint axis controller having a proportional-derivative (PD) controller, the friction estimator module, the gravity compensator module, and a two-sided saturation block, which is configured to limit the commanded torque of the motors of the joints 44a, 44b, 44c.
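A minimal per-joint sketch of that joint axis controller (PD term followed by two-sided torque saturation); the gravity and friction compensation terms mentioned above are noted but omitted, and the gains are placeholders:

```python
import numpy as np

def pd_joint_torque(q_des, q, qd, kp, kd, tau_max):
    """PD torque command per joint with a two-sided saturation block.

    q_des, q : desired and measured joint angles (rad)
    qd       : measured joint velocities (rad/s)
    kp, kd   : proportional and derivative gains
    tau_max  : symmetric torque limit per joint
    Gravity and friction compensation would normally be added to tau before
    saturation; they are left out of this sketch.
    """
    tau = kp * (np.asarray(q_des) - np.asarray(q)) - kd * np.asarray(qd)
    return np.clip(tau, -np.asarray(tau_max), np.asarray(tau_max))
```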
- the surgical robotic system 10 is setup around a surgical table 90.
- the system 10 includes movable carts 60a-d, which may be numbered “1” through “4.”
- each of the carts 60a-d are positioned around the surgical table 90.
- Position and orientation of the carts 60a-d depends on a plurality of factors, such as placement of a plurality of access ports 55a-d, which in turn, depends on the surgery being performed.
- the access ports 55a-d are inserted into the patient, and carts 60a-d are positioned to insert instruments 50 and the laparoscopic camera 51 into corresponding ports 55a-d.
- each of the robotic arms 40a-d is attached to one of the access ports 55a-d that is inserted into the patient by attaching the latch 46c (FIG. 2) to the access port 55 (FIG. 3).
- the IDU 52 is attached to the holder 46, followed by the SIM 43 being attached to a distal portion of the IDU 52.
- the instrument 50 is attached to the SIM 43.
- the instrument 50 is then inserted through the access port 55 by moving the IDU 52 along the holder 46.
- the SIM 43 includes a plurality of drive shafts configured to transmit rotation of individual motors of the IDU 52 to the instrument 50 thereby actuating the instrument 50.
- the SIM 43 provides a sterile barrier between the instrument 50 and the other components of robotic arm 40, including the IDU 52.
- the SIM 43 is also configured to secure a sterile drape (not shown) to the IDU 52.
- a method for intraoperative fusion of different imaging modalities includes combining preoperative imaging and intraoperative imaging to provide a combined 3D image of an organ, tumor, or any other tissue as well as overlays of the different modalities.
- Preoperative imaging includes any suitable imaging modality such as computed tomography (CT), magnetic resonance imaging (MRI), or any other imaging modality capable of obtaining 3D images as shown in FIG. 8A.
- Intraoperative imaging may be ultrasound imaging.
- CT imaging is well-suited for preoperative use since CT provides high quality images. However, intraoperative CT use is undesirable due to radiation exposure and supine positioning of the patient.
- Ultrasound imaging is well-suited for intraoperative use since it is safe for frequent imaging regardless of the position of the patient, even though ultrasound provides noisy images with limited perspective.
- other imaging modalities may also be used, such as gamma radiation, Raman spectroscopy, multispectral imaging, a time-resolved fluorescence spectroscopy (ms-TRFS) probe, and autofluorescence.
- the image processing device 56 receives preoperative images, which may be done by obtaining a plurality of 2D images and reconstructing a 3D volumetric image therefrom.
- preoperative images may be provided to any other computing device (e.g., outside the operating room) to perform the image processing steps described herein.
- FIG. 7 provides additional sub steps for each of the main steps of the method of FIG. 6.
- Step 100 includes multiple segmentation steps 100a-d, namely, segmentation of the organ surface, vasculature, landmarks, and tumor.
- segmentation denotes obtaining a plurality of 2D slices or segments of an object.
- the image processing device 56 or another computing device generates a 3D model shown in FIG. 8B, which may be a wire mesh model based on the preoperative image.
- the image processing device 56 may generate the 3D model including a plurality of points or vertices interconnected by line segments based on the segmentations and include a surface texture over the vertices and segments.
- Steps 100 and 102 are performed preoperatively, with subsequent steps being performed once the surgical procedure has commenced, which includes setting up the robotic system 10 as shown in FIG. 5.
- the method of the present disclosure may be implemented using a stand-alone imaging system 80 (FIG. 10), the laparoscopic camera 51, and a laparoscopic ultrasound probe 70, which are coupled to the image processing device 56, and one or more screens 32 and 34 (FIG. 1).
- the ultrasonic probe 70 is inserted through one of the access ports 55a-d and may be controlled by one of the robotic arms 40a-d and corresponding IDU 52.
- the ultrasound probe 70 includes an ultrasound transducer 72 configured to output ultrasound waves.
- the laparoscopic camera 51 is positioned such that the surgical site is within its field of view, and the ultrasound probe 70 is then also moved into the field of view of the laparoscopic camera 51.
- the image processing device 56 localizes the ultrasound probe 70.
- the image processing device 56 may store (in memory or storage) dimensions of the ultrasound probe 70, and positions and distances between key points or virtual fiducial markers 74.
- the image processing device 56 analyzes all stereo pair frames (left and right channel images) from the video stream to identify a plurality of the key points or virtual fiducial markers 74 (FIG. 9), which then enables the image processing device 56 to determine 3D dimensions in the video stream, e.g., depth mapping.
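One way the identified key points can be turned into 3D measurements from a calibrated stereo pair is linear triangulation, sketched below; this is an assumption about the depth-mapping step, not a detail stated in the application:

```python
import numpy as np
import cv2

def triangulate_keypoints(pts_left, pts_right, P_left, P_right):
    """Triangulate matched probe key points from a calibrated stereo pair.

    pts_left, pts_right : (N, 2) pixel coordinates of the same key points in the
                          left and right channel images
    P_left, P_right     : 3x4 projection matrices from stereo calibration
    Returns (N, 3) key-point positions in the camera frame; distances between
    them can be checked against the stored probe dimensions.
    """
    X_h = cv2.triangulatePoints(P_left, P_right,
                                np.asarray(pts_left, dtype=np.float64).T,
                                np.asarray(pts_right, dtype=np.float64).T)
    return (X_h[:3] / X_h[3]).T      # homogeneous -> Euclidean coordinates
```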
- the virtual fiducial markers 74 are generated by a machine learning image processing algorithm configured to identify geometry of the ultrasound probe 70.
- the image processing device 56 is configured to execute the image processing algorithm, which may include deep learning model to estimate 6 degrees of freedom (DoF) pose of a rigid object (i.e., ultrasound probe 70) from stereo or monocular laparoscopic camera images.
- Realistic training data for probe localization for the deep learning model is provided by a custom synthetic data generation pipeline.
- A synthetic 3D anatomically accurate surgical site is developed based on actual data from surgical procedures.
- the ultrasound probe may be rendered on the surgical site using the 3D virtual (e.g., computer aided drafting) model of the probe and stereo laparoscopic camera geometry from camera calibration.
- Localization may be based on image processing by the image processing device 56 as described above, or may additionally also include kinematics data of the robotic arm 40 moving the ultrasound probe 70.
- Kinematics data includes position, velocity, pose, orientation, joint angles, and other data based on the movement commands provided to the robotic arm 40 and execution of the commands by the robotic arm.
- other tracking techniques may also be used, such as electromagnetic tracking.
- the ultrasound probe 70 may be held by the instrument 50 as shown in FIGS. 8C and 10.
- the image processing device localizes the instrument 50 using the same deep learning algorithm for identifying virtual fiducial markers as described above with respect to step 106.
- the image processing device 56 may be further configured to estimate the articulated pose and orientation of the instrument 50 holding the ultrasound probe 70.
- the image processing device 56 may be configured to estimate the pose and orientation of the ultrasound probe 70 by combining the pose and orientation of the probe 70 as well as the pose and orientation of the instrument 50 holding the probe 70.
- the WCS may be tied to the access port 55 from which the camera 51 is inserted, in which case the location of the access port 55 on the patient anatomy is estimated from one or more external cameras (not shown), which may be mounted on the mobile carts 60a-d and/or system tower 10 or other suitable locations.
- localization of the camera includes depth mapping from a stereo pair followed by DV-SLAM on the successive stereo image pairs over time. Depth mapping provides a single-frame snapshot of how far objects are from the camera, whereas DV-SLAM provides for localization of the camera in the WCS.
- the ultrasound probe 70 and the instrument 50 are localized in the video feed by the image processing device 56.
- the ultrasound probe 70 and the instrument 50 are localized in the WCS.
- the WCS is either tied to the access port 55 from which camera 51 is inserted as explained above.
- the WCS can also be tied to the access port 55 from which one of the instruments 50, i.e., the grasper instrument 50 manipulating the ultrasound probe 70, is inserted.
- the location of the access port 55 in the patient is estimated by segmenting the shaft of the instrument 50 in a plurality of images captured by the camera 51 and computing the intersection between lines fit to the shafts, hence localizing the remote center of motion (RCM) of the access port 55 of the instrument 50.
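The shaft-line intersection can be computed as the least-squares point closest to all fitted 3D lines, as in the sketch below (input conventions are assumptions made for illustration):

```python
import numpy as np

def estimate_rcm(line_points, line_dirs):
    """Least-squares 'intersection' of 3D lines fit to the instrument shaft.

    line_points : (N, 3) a point on each fitted shaft line
    line_dirs   : (N, 3) direction of each fitted shaft line
    Returns the 3D point minimizing the summed squared distance to all lines,
    taken as the remote center of motion (access port location).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(np.asarray(line_points, float), np.asarray(line_dirs, float)):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the line
        A += M
        b += M @ p
    # lstsq tolerates the degenerate case of nearly parallel lines.
    return np.linalg.lstsq(A, b, rcond=None)[0]
```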
- the image processing device 56 is configured to dispose the location and orientation of ultrasound probe 70 in each image with respect to the WCS, hence transferring all images with respect to a fixed frame of reference.
- the ultrasound probe 70 is used to obtain ultrasound images of the tissue at step 118.
- the ultrasound probe 70 is used to perform segmentation of vessels, landmarks, tumor at steps 118a, 118b, 118c, respectively.
- the image processing device 56 may include ultrasound image processors and other components for displaying the ultrasound images alongside the video images from the laparoscopic camera 51 in any suitable fashion e.g., on separate screens 32 and 34, picture-in-picture, overlays, etc.
- image processing device 56 is also configured to construct a 3D ultrasound volume from the segmented ultrasound images.
- FIG. 11 shows exemplary segmented ultrasound image slices and volumes reconstructed based on sub-tissue landmarks (e.g., critical structures, tumor, veins, arteries, etc.). Segmented landmarks may be displayed as different colored overlays on ultrasound images displayed on the screen 32 and/or 34.
- Ultrasound volume may be generated using a method that relies on the 6 DoF probe pose estimation from calibrated stereo endoscope images.
- the ultrasound probe is localized in 3D space by 6 DoF probe pose estimation from stereo endoscope images in the stereo endoscope frame of reference.
- the stereo endoscope camera itself may be localized in 3D space in the WCS of reference tied to the access port 55 into which the camera 51 is inserted, or in the WCS of reference tied to the access port 55 into which the grasper instrument 50 is inserted.
- Stereo reconstruction and DV-SLAM are used to update the position of the camera 51 from the images provided by the camera itself.
- the image processing device may be also configured to generate the ultrasound volume by computing the value of each ultrasound voxel and interpolating between the values of ultrasound image slice pixels that overlap the corresponding voxels after placing each ultrasound image in the world coordinate system.
- the 2D ultrasound images may be represented by their planar equations using three points, and the value of a 3D ultrasound voxel between slices may be computed from a distance-weighted orthogonal projection.
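A small sketch of that computation, assuming each slice plane is given by three non-collinear world points and a sampler that returns the slice intensity at an in-plane world point; the names and the cutoff distance are illustrative assumptions:

```python
import numpy as np

def voxel_value_from_slices(voxel_xyz, slice_planes, slice_samplers, max_dist):
    """Distance-weighted interpolation of a voxel from nearby ultrasound slices.

    voxel_xyz      : (3,) world position of the voxel centre
    slice_planes   : list of (p0, p1, p2) world points defining each slice plane
    slice_samplers : list of callables mapping an in-plane world point to the
                     slice intensity at that point (assumed to be available)
    max_dist       : only slices closer than this distance contribute
    """
    voxel_xyz = np.asarray(voxel_xyz, dtype=float)
    values, weights = [], []
    for (p0, p1, p2), sample in zip(slice_planes, slice_samplers):
        p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)              # unit normal of the slice plane
        dist = np.dot(voxel_xyz - p0, n)       # signed orthogonal distance
        if abs(dist) > max_dist:
            continue
        foot = voxel_xyz - dist * n            # orthogonal projection onto the plane
        values.append(sample(foot))
        weights.append(1.0 / (abs(dist) + 1e-6))   # inverse-distance weighting
    if not weights:
        return 0.0
    return float(np.dot(values, weights) / np.sum(weights))
```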
- Prior to pre-operative model registration, i.e., registration of the 3D model to the ultrasound images/volume, common landmarks in the preoperative (e.g., CT/MRI) and intraoperative images (e.g., ultrasound and laparoscopic video feed) are identified. This involves ultrasound sub-tissue structure identification (vessels, tumor, etc.) through ultrasound segmentation, followed by matching and aligning the corresponding landmarks in the pre-operative imaging/model (i.e., same vessel, tumor, etc.).
- Pre-operative model registration includes the following: identification of anatomical landmarks in pre-operative imaging/model (organ surface, sub-surface internal critical structures, e.g., vessels, tumor); identification of surface anatomical landmarks in stereo endoscope images; and segmentation of sub-surface internal critical structures, e.g., vessels, tumor.
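Once corresponding landmarks have been identified in both modalities, an initial rigid alignment can be computed, for example with the least-squares (Kabsch) fit below; the application does not name a specific algorithm, so this is only a plausible building block:

```python
import numpy as np

def rigid_register(src, dst):
    """Rigid (rotation + translation) alignment of matched landmark sets.

    src : (K, 3) landmark positions in the preoperative model, e.g., vessel
          branch points or a tumor centroid
    dst : (K, 3) corresponding landmark positions in the ultrasound volume
    Returns (R, t) such that R @ src[i] + t best matches dst[i] in the
    least-squares sense.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```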
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Pathology (AREA)
- Animal Behavior & Ethology (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Vascular Medicine (AREA)
- Image Processing (AREA)
- Manipulator (AREA)
Abstract
An imaging system includes a laparoscopic camera configured to capture a video stream of tissue and an intraoperative imaging device configured to be inserted through an access port and to obtain a plurality of signals from tissue. The system also includes an image processing device configured to: generate a 3D reconstruction of a surgical site from the laparoscopic camera video stream to estimate a 3D location of the intraoperative imaging device in a frame of reference of the laparoscopic camera, and localize the laparoscopic camera and the intraoperative imaging device in a world coordinate system based on the 3D reconstruction of the surgical site. The image processing device is further configured to receive a volumetric image of tissue formed from a preoperative imaging modality and to generate a multi-frame representation from a plurality of signals from the intraoperative imaging device. The image processing device is also configured to register the multi-frame representation with the volumetric image of tissue; deform the volumetric image of tissue according to the multi-frame representation; and generate an overlay of the volumetric image of tissue and the multi-frame representation. The system further includes a screen configured to display the video stream showing data based on the plurality of signals from the intraoperative imaging device and the overlay extending from the intraoperative imaging device.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263400525P | 2022-08-24 | 2022-08-24 | |
| US202263428204P | 2022-11-28 | 2022-11-28 | |
| PCT/IB2023/058368 WO2024042468A1 (fr) | 2022-08-24 | 2023-08-23 | Surgical robotic system and method for intraoperative fusion of different imaging modalities |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4577121A1 true EP4577121A1 (fr) | 2025-07-02 |
Family
ID=87929132
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23764720.1A Pending EP4577121A1 (fr) | Surgical robotic system and method for intraoperative fusion of different imaging modalities |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4577121A1 (fr) |
| CN (1) | CN119816252A (fr) |
| WO (1) | WO2024042468A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120203763B (zh) * | 2025-05-28 | 2025-08-15 | 首都医科大学附属北京安贞医院南充医院·南充市中心医院 | Surgical assistance system and method based on image processing |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017157970A1 (fr) * | 2016-03-16 | 2017-09-21 | Koninklijke Philips N.V. | Computing device for overlaying a laparoscopic image and an ultrasound image |
| US11918306B2 (en) * | 2017-02-14 | 2024-03-05 | Intuitive Surgical Operations, Inc. | Multi-dimensional visualization in computer-assisted tele-operated surgery |
2023
- 2023-08-23 EP EP23764720.1A patent/EP4577121A1/fr active Pending
- 2023-08-23 WO PCT/IB2023/058368 patent/WO2024042468A1/fr not_active Ceased
- 2023-08-23 CN CN202380061378.XA patent/CN119816252A/zh active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN119816252A (zh) | 2025-04-11 |
| WO2024042468A1 (fr) | 2024-02-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8498691B2 (en) | Robotic catheter system and methods | |
| KR102117273B1 (ko) | 수술 로봇 시스템 및 그 제어 방법 | |
| US20150320514A1 (en) | Surgical robots and control methods thereof | |
| WO2019006028A1 (fr) | Systèmes et procédés pour projeter une image endoscopique sur un volume tridimensionnel | |
| US20230363834A1 (en) | Real-time instrument position identification and tracking | |
| US11948226B2 (en) | Systems and methods for clinical workspace simulation | |
| US20240324856A1 (en) | Surgical trocar with integrated cameras | |
| WO2024238729A2 (fr) | Système robotique chirurgical et procédé de génération de jumeau numérique | |
| US12011236B2 (en) | Systems and methods for rendering alerts in a display of a teleoperational system | |
| EP4577121A1 (fr) | Système robotique chirurgical et procédé de fusion peropératoire de différentes modalités d'imagerie | |
| WO2024006729A1 (fr) | Placement de port assisté pour chirurgie assistée par robot ou minimalement invasive | |
| WO2024201216A1 (fr) | Système robotique chirurgical et méthode pour empêcher une collision d'instrument | |
| US20240156325A1 (en) | Robust surgical scene depth estimation using endoscopy | |
| US12310670B2 (en) | System and method related to registration for a medical procedure | |
| EP4654912A1 (fr) | Système robotique chirurgical et procédé de placement d'orifice d'accès assisté | |
| WO2022147074A1 (fr) | Systèmes et procédés de suivi d'objets traversant une paroi corporelle pour des opérations associées à un système assisté par ordinateur | |
| US20240137583A1 (en) | Surgical robotic system and method with multiple cameras | |
| WO2025078950A1 (fr) | Système robotique chirurgical et procédé de commande intégrée de données de modèle 3d | |
| WO2024150077A1 (fr) | Système robotique chirurgical et procédé de communication entre une console de chirurgien et un assistant de chevet | |
| WO2025181641A1 (fr) | Système robotique chirurgical pour l'affichage non obstructif d'images ultrasonores peropératoires en superposition | |
| WO2025101531A1 (fr) | Pré-rendu graphique de contenu virtuel pour systèmes chirurgicaux | |
| WO2025064440A1 (fr) | Enregistrement et suivi assistés par ordinateur de modèles d'objets anatomiques | |
| WO2025172381A1 (fr) | Système chirurgical robotisé pour guidage optimal de sonde échographique | |
| WO2024150088A1 (fr) | Système robotique chirurgical et méthode de navigation d'instruments chirurgicaux | |
| WO2025184368A1 (fr) | Rétroaction de force basée sur l'anatomie et guidage d'instrument |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20250320 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |