WO2025064440A1 - Computer-assisted registration and tracking of models of anatomical objects
- Publication number
- WO2025064440A1 (PCT application PCT/US2024/047116)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- model
- segment
- anatomical
- control system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
Definitions
- the present disclosure relates generally to surgical systems. Specifically, the present disclosure relates to a surgical system that registers and tracks models of anatomical objects.
- a computer system for adjusting models of anatomical objects includes a memory and a processor communicatively coupled to the memory.
- the processor receives a first image of a plurality of anatomical objects and segments the first image into a plurality of segments.
- a first segment of the plurality of segments shows an anatomical object of the plurality of anatomical objects.
- the processor also receives a user selection of the first segment and receives a model that includes a submodel of the anatomical object shown in the first segment.
- the processor further moves the model until the submodel of the anatomical object aligns with the anatomical object shown in the first segment.
- a method includes receiving a first image of a plurality of anatomical objects of a patient and segmenting the first image into a plurality of segments.
- a first segment of the plurality of segments shows an anatomical object of the plurality of anatomical objects.
- the method also includes receiving a user selection of the first segment and receiving a model that includes a submodel of the anatomical object shown in the first segment.
- the method further includes moving the model until the submodel of the anatomical object aligns with the anatomical object shown in the first segment.
- Other embodiments include a non-transitory machine-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method.
- Figures 1A through 1C illustrate an example surgical system.
- Figure 2A illustrates an example surgical system.
- Figure 2B illustrates an example medical instrument system in the surgical system of Figure 2A.
- Figure 2C illustrates an example portion of the medical instrument system of Figure 2B.
- Figure 3 illustrates an example operation in a surgical system.
- Figure 4 illustrates an example control system in a surgical system.
- Figure 5 illustrates an example control system in a surgical system.
- Figure 6 illustrates an example control system in a surgical system.
- Figure 7 illustrates an example control system in a surgical system.
- Figure 8 illustrates an example control system in a surgical system.
- Figure 9 is a flowchart of an example method performed in a surgical system.
- Figure 10 is a flowchart of an example method performed in a surgical system.
- Figure 11 illustrates an example model adjustment in a surgical system.
- a surgical system includes an endoscope that is inserted into a patient’s body so that the endoscope captures video or an image stream of an anatomical object on which the doctor will be operating.
- the video or images provide a limited view or perspective of the anatomical object, especially if other structures block or occlude portions of the anatomical object.
- the surgical system also provides the doctor with a model (e.g., a two-dimensional or three-dimensional model) of the anatomical object.
- the model serves as a map that helps the doctor understand what portion of the anatomical object is being viewed and what parts of the anatomical object might be hidden or occluded from view.
- the model provides little help to the doctor, however, if the model is not aligned with the video or image stream. For example, if the video or image stream provides a view of an anatomical object from the top-down, but the model presents a view of the anatomical object from the bottom-up, then it will be difficult for the doctor to glean useful information from the model during the operation.
- the doctor is required to manually manipulate the model to align the model with the video or image stream, but this process takes a significant amount of time that may not be available during the operation. Additionally, as the operation progresses, the doctor moves the endoscope to change the view of the anatomical object, which requires the doctor to use more time to manually manipulate the model to re-align the model with the changed view.
- the present disclosure describes a surgical system that automatically aligns a model of anatomical objects with a view of those anatomical objects in a video or image stream (which may also be referred to as registering the model).
- the surgical system receives an image of multiple anatomical objects and segments (e.g., using a computer vision process) the image to distinguish the various structures that appear in the image, such as the anatomical objects.
- the surgical system requests the doctor to select one of the anatomical objects by selecting a segment in the segmented image that shows the anatomical object.
- the selected anatomical object serves as a primary label for the anatomy shown in the video or image stream.
- the parenchyma may be the primary label for the kidney.
- the doctor may select the segment of the segmented image that shows the parenchyma.
- the surgical system may label, on the model, the structure that serves as the primary label to assist the doctor in determining which of the segments of the segmented image shows the primary label.
- the surgical system then automatically aligns a model of the anatomical objects with the anatomical objects shown in the video or image stream.
- the surgical system may automatically align a model of the structures of the kidney with the view shown in the video or image stream.
- the surgical system may rotate or translate the model of the structures of the kidney so that the parenchyma in the model aligns with the view of the parenchyma shown in the video or image stream.
- the surgical system may first rotate the model out-of-plane (e.g., rotate the model about an axis or vector residing within a viewing plane of the model) so that the orientation of the model is consistent with the orientation of the patient.
- the surgical system may then set the depth of the model to be the same as the depth of the anatomical object shown in the video or image stream.
- the surgical system may then rotate the model in-plane (e.g., rotate the model about an axis or vector extending out of the viewing plane of the model) so that the model aligns with the anatomical object in the video or image stream.
- These movements (e.g., rotations, depth adjustments, translations, etc.) register the model with the view of the anatomical objects shown in the video or image stream.
- the surgical system may, upon detecting a change in the view provided in the video or image stream, re-align the model in response to the changed view.
- the surgical system provides several technical advantages. For example, the surgical system reduces the amount of time that it takes to align the model with the view shown in the video or image stream. When time is of the essence during the operation, the surgical system enhances the safety and success rate of the operation. As another example, the surgical system provides a more accurate alignment of the model with the view in the video or image stream relative to having the doctor manually align the model with the view. As a result, the surgical system improves the information that the doctor gleans from the model during the operation, which improves the safety and success rate of the operation.
- one or more components of a surgical system may be implemented as a computer-assisted surgical system.
- Figure 1A shows an example computer-assisted surgical system 100 that may implement some of the features described herein.
- the surgical system 100 may include a manipulator assembly 102, a user control apparatus 104, and an auxiliary apparatus 106, all of which are communicatively coupled to each other.
- the surgical system 100 may be utilized by a medical team to perform a computer-assisted medical procedure or other similar operation on a body of a patient 108 or on any other body as may serve a particular implementation.
- the medical team may include a first user 110-1 (such as a surgeon for a surgical procedure), a second user 110-2 (such as a patient-side assistant), a third user 110-3 (such as another assistant, a nurse, a trainee, etc.), and a fourth user 110-4 (such as an anesthesiologist for a surgical procedure), all of whom may be collectively referred to as users 110, and each of whom may control, interact with, or otherwise be a user of the surgical system 100. More, fewer, or alternative users may be present during a medical procedure as may serve a particular implementation. For example, team composition for different medical procedures, or for non-medical procedures, may differ and include users with different roles.
- Figure 1A illustrates an ongoing minimally invasive medical procedure such as a minimally invasive surgical procedure
- the surgical system 100 may similarly be used to perform open medical procedures or other types of operations.
- operations such as exploratory imaging operations, mock medical procedures used for training purposes, and/or other operations may also be performed.
- the manipulator assembly 102 may include one or more manipulator arms 112 (e.g., manipulator arms 112-1 through 112-4) to which one or more instruments may be coupled.
- the instruments may be used for a computer-assisted surgical procedure on the patient 108 (e.g., by being at least partially inserted into the patient 108 and manipulated within the patient 108). While the manipulator assembly 102 is depicted and described herein as including four manipulator arms 112, the manipulator assembly 102 may include a single manipulator arm 112 or any other number of manipulator arms as may serve a particular implementation.
- one or more instruments may be partially or entirely manually controlled, such as by being handheld and controlled manually by a person. These partially or entirely manually controlled instruments may be used in conjunction with, or as an alternative to, computer-assisted instrumentation that is coupled to the manipulator arms 112.
- the user control apparatus 104 may facilitate teleoperational control by the user 110-1 of the manipulator arms 112 and instruments attached to the manipulator arms 112.
- the user control apparatus 104 may provide the user 110-1 with imagery of an operational area associated with the patient 108 as captured by an imaging device.
- the manipulator arms 112 or any instruments coupled to the manipulator arms 112 may mimic the dexterity of the hand, wrist, and fingers of the user 110-1 across multiple degrees of freedom of motion.
- the user 110-1 may intuitively perform a procedure (e.g., an incision procedure, a suturing procedure, etc.) using one or more of the manipulator arms 112 or any instruments coupled to the manipulator arms 112.
- the auxiliary apparatus 106 may include one or more computing devices that perform auxiliary functions in support of the procedure, such as providing insufflation, electrocautery energy, illumination or other energy for imaging devices, image processing, or coordinating components of the surgical system 100.
- the auxiliary apparatus 106 may include a display monitor 114 that displays one or more user interfaces, or graphical or textual information in support of the procedure.
- the display monitor 114 may be a touchscreen display that provides user input functionality.
- Augmented content provided by a region-based augmentation system may be similar to, or differ from, content associated with the display monitor 114 or one or more display devices in the operation area (not shown).
- the manipulator assembly 102, user control apparatus 104, and auxiliary apparatus 106 may be communicatively coupled one to another in any suitable manner.
- the manipulator assembly 102, user control apparatus 104, and auxiliary apparatus 106 may be communicatively coupled by way of control lines 116, which may represent any wired or wireless communication link as may serve a particular implementation.
- manipulator assembly 102, user control apparatus 104, and auxiliary apparatus 106 may each include one or more wired or wireless communication interfaces, such as one or more local area network interfaces, Wi-Fi network interfaces, cellular interfaces, and so forth.
- Figure 1B illustrates an example manipulator assembly 102.
- the manipulator assembly 102 includes a base 118, a manipulator arm 112-1, a manipulator arm 112-2, a manipulator arm 112-3, and a manipulator arm 112-4.
- Each manipulator arm 112-1, 112-2, 112-3, and 112-4 is pivotably coupled to the base 118.
- the base 118 may include casters to allow ease of mobility. In some embodiments, the manipulator assembly 102 is fixedly mounted to a floor, ceiling, operating table, structural framework, or the like.
- two of the manipulator arms 112-1, 112-2, 112-3, or 112-4 hold surgical instruments and a third holds a stereo endoscope.
- the remaining manipulator arms are available so that other instruments may be introduced at the work site.
- the remaining manipulator arms may be used for introducing another endoscope or another image capturing device, such as an ultrasound transducer, to the work site.
- Each of the manipulator arms 112-1 , 112-2, 112-3, and 112-4 may be formed of links that are coupled together and manipulated through actuatable joints.
- Each of the manipulator arms 112-1 , 112-2, 112-3, and 112-4 may include a setup arm and a device manipulator.
- the setup arm positions its held device so that a pivot point occurs at its entry aperture into the patient.
- the device manipulator may then manipulate its held device so that the held device may be pivoted about the pivot point, inserted into and retracted out of the entry aperture, and rotated about its shaft axis.
- Each of the manipulator arms 112-1 , 112-2, 112-3, and 112-4 may include sensors (e.g., kinematics sensors, position sensors, accelerometers, etc.) that detect or track movement of the manipulator arms 112-1 , 112-2, 112-3, and 112-4. For example, these sensors may detect how far or how quickly a manipulator arm 112-1 , 112-2, 112-3, or 112-4 moves in a certain direction.
- Figure 1C illustrates an example user control apparatus 104.
- the user control apparatus 104 includes a stereo vision display 120 so that the user may view the surgical work site in stereo vision from images captured by the stereoscopic camera of the manipulator assembly 102.
- Left and right eyepieces 122 and 124 are provided in the stereo vision display 120 so that the user may view left and right display screens inside the display 120 respectively with the user's left and right eyes. While typically viewing an image of the surgical site on a suitable viewer or display, the surgeon performs the surgical procedures on the patient by manipulating master control input devices, which in turn control the motion of robotic instruments.
- the user control apparatus 104 also includes left and right input devices 126 and 128 that the user may grasp respectively with his/her left and right hands to manipulate devices (e.g., surgical instruments) being held by the manipulator arms 112-1, 112-2, 112-3, and 112-4 of the manipulator assembly 102 in preferably six or more degrees of freedom (“DOF”).
- Foot pedals 130 with toe and heel controls are provided on the user control apparatus 104 so the user may control movement and/or actuation of devices associated with the foot pedals.
- the processor may include any electronic circuitry, including, but not limited to one or a combination of microprocessors, microcontrollers, application specific integrated circuits (ASIC), application specific instruction set processor (ASIP), and/or state machines, that communicatively couples to a memory and controls the operation of the user control apparatus 104 and/or the auxiliary apparatus 106.
- the processor may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture.
- the processor may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components.
- the processor may include other hardware that operates software to control and process information.
- the processor executes software stored on a memory to perform any of the functions described herein.
- the processor controls the operation and administration of the user control apparatus 104 or the auxiliary apparatus 106 by processing information (e.g., information received from the user control apparatus 104, the manipulator assembly 102, the auxiliary apparatus 106, and/or a memory).
- the processor is not limited to a single processing device and may encompass multiple processing devices contained in the same device or computer or distributed across multiple devices or computers.
- the processor is considered to perform a set of functions or actions if the multiple processing devices collectively perform the set of functions or actions, even if different processing devices perform different functions or actions in the set.
- Figure 2A illustrates an example surgical system 200 that may implement some of the features described herein.
- the surgical system 200 can be used, for example, in surgical, diagnostic, therapeutic, biopsy, or non-medical procedures.
- the surgical system 200 (which may be a robotically-assisted surgical system) includes one or more manipulator assemblies 202 for operating one or more medical instrument systems 204 in performing various procedures on a patient P positioned on a table T in a medical environment.
- the manipulator assembly 202 can drive catheter or end effector motion, can apply treatment to target tissue, and/or can manipulate control members.
- the manipulator assembly 202 can be teleoperated, non-teleoperated, or a hybrid teleoperated and non-teleoperated assembly with select degrees of freedom of motion that can be motorized and/or teleoperated and select degrees of freedom of motion that can be non-motorized and/or non-teleoperated.
- An operator input system 206 which can be inside or outside of the medical environment, generally includes one or more control devices for controlling the manipulator assembly 202.
- the manipulator assembly 202 supports a medical instrument system 204 and can optionally include a plurality of actuators or motors that drive inputs on the medical instrument system 204 in response to commands from a control system 212.
- the actuators can optionally include drive systems that when coupled to the medical instrument system 204 can advance the medical instrument system 204 into a natural or surgically created anatomic orifice.
- Other drive systems can move the distal end of the medical instrument in multiple degrees of freedom, which can include three degrees of linear motion (e.g., linear motion along the x, y, and z Cartesian axes) and in three degrees of rotational motion (e.g., rotation about the x, y, and z Cartesian axes).
- the manipulator assembly 202 can support various other systems for irrigation, treatment, or other purposes.
- Such systems can include fluid systems (e.g., reservoirs, heating/cooling elements, pumps, and valves), generators, lasers, interrogators, and ablation components.
- the surgical system 200 also includes a display system 210 for displaying an image or representation of the surgical site and a medical instrument system 204.
- the image or representation is generated by an imaging system 209, which may include an endoscopic imaging system.
- the display system 210 and operator input system 206 may be oriented so that an operator O can control the medical instrument system 204 and the operator input system 206 with the perception of telepresence.
- a graphical user interface can be displayable on the display system 210 and/or a display system of an independent planning workstation.
- the imaging system 209 includes an endoscopic imaging system with components that are integrally or removably coupled to the medical instrument system 204.
- a separate imaging device such as an endoscope, attached to a separate manipulator assembly can be used with the medical instrument system 204 to image the surgical site.
- the imaging system 209 can be implemented as hardware, firmware, software, or a combination thereof, which interact with or are otherwise executed by one or more computer processors, which can include the processors 214 of the control system 212.
- the surgical system 200 also includes a sensor system 208.
- the sensor system 208 may include a position/location sensor system (e.g., an actuator encoder or an electromagnetic (EM) sensor system) and/or a shape sensor system (e.g., an optical fiber shape sensor) for determining the position, orientation, speed, velocity, pose, and/or shape of the medical instrument system 204.
- These sensors may also detect a position, orientation, or pose of the patient P on the table T. For example, the sensors may detect whether the patient P is face-down or face-up. As another example, the sensors may detect a direction in which the head of the patient P is directed.
- the sensor system 208 can also include temperature, pressure, force, or contact sensors, or the like.
- the surgical system 200 can also include a control system 212, which includes at least one memory 216 and at least one computer processor 214 for effecting control between the medical instrument system 204, the operator input system 206, the sensor system 208, and the display system 210.
- the control system 212 includes programmed instructions (e.g., a non-transitory machine-readable medium storing the instructions) to implement a procedure using the surgical system 200, including for navigation, steering, imaging, engagement feature deployment or retraction, applying treatment to target tissue (e.g., via the application of energy), or the like.
- the control system 212 may further include a virtual visualization system to provide navigation assistance to the operator O when controlling medical instrument system 204 during an image-guided surgical procedure.
- Virtual navigation using the virtual visualization system can be based upon reference to an acquired pre-operative or intra-operative dataset of anatomic passageways.
- the virtual visualization system processes images of the surgical site imaged using imaging technology, such as computerized tomography (CT), magnetic resonance imaging (MRI), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like.
- the control system 212 uses a pre-operative image to locate the target tissue (using vision imaging techniques and/or by receiving user input) and create a pre-operative plan, including an optimal first location for performing treatment.
- the pre-operative plan can include, for example, a planned size to expand an expandable device, a treatment duration, a treatment temperature, and/or multiple deployment locations.
- the processor 214 is any electronic circuitry, including, but not limited to one or a combination of microprocessors, microcontrollers, application specific integrated circuits (ASIC), application specific instruction set processor (ASIP), and/or state machines, that communicatively couples to the memory 216 and controls the operation of the control system 212.
- the processor 214 may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture.
- the processor 214 may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components.
- the processor 214 may include other hardware that operates software to control and process information.
- the processor 214 executes software stored on the memory 216 to perform any of the functions described herein.
- the processor 214 controls the operation and administration of the control system 212 by processing information (e.g., information received from the manipulator assembly 202, the operator input system 206, and the memory 216).
- the processor 214 is not limited to a single processing device and may encompass multiple processing devices contained in the same device or computer or distributed across multiple devices or computers.
- the processor 214 is considered to perform a set of functions or actions if the multiple processing devices collectively perform the set of functions or actions, even if different processing devices perform different functions or actions in the set.
- the memory 216 may store, either permanently or temporarily, data, operational software, or other information for the processor 214.
- the memory 216 may include any one or a combination of volatile or non-volatile local or remote devices suitable for storing information.
- the memory 216 may include random access memory (RAM), read only memory (ROM), magnetic storage devices, optical storage devices, or any other suitable information storage device or a combination of these devices.
- the software represents any suitable set of instructions, logic, or code embodied in a computer-readable storage medium.
- the software may be embodied in the memory 216, a disk, a CD, or a flash drive.
- the software may include an application executable by the processor 214 to perform one or more of the functions described herein.
- the memory 216 is not limited to a single memory and may encompass multiple memories contained in the same device or computer or distributed across multiple devices or computers.
- the memory 216 is considered to store a set of data, operational software, or information if the multiple memories collectively store the set of data, operational software, or information, even if different memories store different portions of the data, operational software, or information in the set.
- Figure 2B illustrates an example medical instrument system 204 in the surgical system 200.
- the medical instrument system 204 is used in an image-guided medical procedure.
- the medical instrument system 204 may be used for non-teleoperational exploratory procedures or in procedures involving traditional manually operated medical instruments, such as endoscopy.
- the medical instrument system 204 includes an elongate flexible device 220, such as a flexible catheter or endoscope (e.g., gastroscope, bronchoscope), coupled to a drive unit 222.
- the elongate flexible device 220 includes a flexible body 224 having a proximal end 226 and a distal end, or tip portion, 228.
- the flexible body 224 has an approximately 14-20 millimeter outer diameter. Other flexible body outer diameters may be larger or smaller.
- the flexible body 224 has an appropriate length to reach certain portions of the anatomy, such as the lungs, sinuses, throat, or the upper or lower gastrointestinal region, when the flexible body 224 is inserted into a patient’s oral or nasal cavity.
- the medical instrument system 204 includes a tracking system 230 for determining the position, orientation, speed, velocity, pose, and/or shape of the distal end 228 and/or of one or more segments 232 along the flexible body 224 using one or more sensors and/or imaging devices.
- the tracking system 230 is implemented as hardware, firmware, software, or a combination thereof, which interact with or are otherwise executed by one or more computer processors, which may include the processors 214 of control system 212.
- the tracking system 230 tracks the distal end 228 and/or one or more of the segments 232 using a shape sensor 234. In some embodiments, the tracking system 230 tracks the distal end 228 using a position sensor system 236, such as an electromagnetic (EM) sensor system. In some examples, the position sensor system 236 measures six degrees of freedom (e.g., three position coordinates x, y, and z and three orientation angles indicating pitch, yaw, and roll of a base point) or five degrees of freedom (e.g., three position coordinates x, y, and z and two orientation angles indicating pitch and yaw of a base point).
- the flexible body 224 includes one or more channels 238 sized and shaped to receive one or more medical instruments 240.
- the flexible body 224 includes two channels 238 for separate instruments 240; however, a different number of channels 238 can be provided.
- Figure 2C illustrates an example portion of the medical instrument system 204 of Figure 2B. As seen in Figure 2C, the medical instrument 240 extends through the flexible body 224.
- the medical instrument 240 can be used for procedures and aspects of procedures, such as surgery, biopsy, ablation, mapping, imaging, illumination, irrigation, or suction.
- the medical instrument 240 is deployed through the channel 238 of the flexible body 224 and is used at a target location within the anatomy.
- the medical instrument 240 includes, for example, image capture devices, biopsy instruments, ablation instruments, catheters, laser ablation fibers, and/or other surgical, diagnostic, or therapeutic tools.
- the medical tools include end effectors having a single working member such as a scalpel, a blunt blade, a lens, an optical fiber, an electrode, and/or the like.
- Other end effectors include, for example, forceps, graspers, balloons, needles, scissors, clip appliers, and/or the like.
- Other end effectors further include electrically activated end effectors such as electrosurgical electrodes, transducers, sensors, imaging devices, and/or the like.
- the medical instrument 240 is advanced from the opening of the channel 238 to perform the procedure and then retracted back into the channel when the procedure is complete.
- the medical instrument 240 is removed from the proximal end 226 of the flexible body 224 or from another optional instrument port (not shown) along the flexible body 224.
- the medical instrument 240 may be used with an image capture device (e.g., an endoscopic camera) also within the elongate flexible device 220. Alternatively, the medical instrument 240 may itself be the image capture device.
- the medical instrument 240 additionally houses cables, linkages, or other actuation controls (not shown) that extend between the proximal and distal ends to controllably bend the distal end of the medical instrument 240.
- the flexible body 224 also houses cables, linkages, or other steering controls (not shown) that extend between the drive unit 222 and the distal end 228 to controllably bend the distal end 228 as shown, for example, by the broken dashed line depictions 242 of the distal end 228.
- at least four cables are used to provide independent “up-down” steering to control a pitch motion of the distal end 228 and “left-right” steering to control a yaw motion of the distal end 228.
- the drive unit 222 can include drive inputs that removably couple to and receive power from drive elements, such as actuators, of the teleoperational assembly.
- the medical instrument system 204 includes gripping features, manual actuators, or other components for manually controlling the motion of the medical instrument system 204.
- the information from the tracking system 230 can be sent to a navigation system 244, where the information is combined with information from the visualization system 246 and/or the preoperatively obtained models to provide the physician or other operator with real-time position information.
- Figure 3 illustrates an example operation in the surgical system 100 of Figure 1 A or the surgical system 200 of Figure 2A.
- a control system 300 (which may be implemented in the user control apparatus 104 and/or the auxiliary apparatus 106 of the surgical system 100 using the processing device 132 and/or in the control system 212 of the surgical system 200 using the processor 214 and the memory 216) determines the depth of one or more anatomical objects in a video or image stream.
- the operation begins with a camera 302 and a camera 304 being directed towards anatomical objects 306.
- the cameras 302 and 304 may form a stereo camera system positioned on an endoscope.
- the endoscope is inserted into a patient’s body and positioned near the anatomical objects 306.
- the cameras 302 and 304 are directed towards the anatomical objects 306 to produce videos 308 and 310, respectively.
- the videos 308 and 310 include streams of images or frames captured by the cameras 302 and 304. Each image or frame shows the anatomical objects 306.
- the cameras 302 and 304 may be positioned near each other, and the cameras 302 and 304 provide different views of the anatomical objects 306.
- the cameras 302 and 304 may be directed at the same point on the anatomical objects 306, but the cameras 302 and 304 may be positioned at different locations and angled differently towards the anatomical objects 306.
- the videos 308 and 310 provide different views or perspectives of the anatomical objects 306.
- the cameras 302 and 304 communicate the videos 308 and 310 to the control system 300.
- the control system 300 receives the videos 308 and 310 from the cameras 302 and 304.
- the control system 300 also receives or stores camera parameters 312.
- the camera parameters 312 indicate aspects of the positioning of the cameras 302 and 304.
- the camera parameters 312 may include coordinates that indicate the positioning of the cameras 302 and 304 in the patient’s body.
- the camera parameters 312 may also indicate the relative positioning of the camera 302 and the camera 304.
- the camera parameters 312 may also indicate a linear distance between the cameras 302 and 304.
- the camera parameters 312 may indicate angles at which the cameras 302 and 304 are angled.
- the control system 300 uses images in the videos 308 and 310 along with the camera parameters 312 to determine the depth map 314.
- the depth map 314 is a two-dimensional arrangement of pixels that show an image of the video 308 or the video 310. Each pixel in the depth map 314 may be assigned three coordinates. The first two coordinates are x and y coordinates that indicate the position of the pixel in the two-dimensional depth map 314.
- the third coordinate indicates a depth of the object or structure represented by that pixel in the image of the video 308 or 310.
- the third coordinate is used to control a shading of the pixel in the depth map 314. For example, the greater the depth, the lighter the shading of the pixel. The smaller the depth, the darker the shading of the pixel.
- the depth map 314 is an image formed by pixels with shading that indicates the depth of an object in the image shown by the pixels.
- the control system 300 determines the depths of the pixels in the depth map 314 using the videos 308 and 310 and the camera parameters 312. For example, the control system 300 may select a pixel in an image of the video 308. The control system 300 determines a portion of an anatomical object 306 shown in the selected pixel. The control system 300 then locates or identifies a corresponding pixel in an image of the video 310. The corresponding pixel shows the same portion of the anatomical object 306. The control system 300 determines a difference in positioning between the selected pixel in the image of the video 308 and the corresponding pixel in the image of the video 310.
- the control system 300 performs geometric calculations using this difference and the camera parameters 312 to determine the depth of that portion of the anatomical object 306 in the image of the video 308.
- the control system 300 then assigns the depth to that pixel in the depth map 314.
- the control system 300 may repeat this process to determine the depths of each pixel in the depth map 314.
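- As an illustration of the geometric calculation described above, the following sketch computes depth from stereo disparity (depth is proportional to the camera baseline and focal length and inversely proportional to the pixel disparity) and converts the depths into the shading convention of the depth map 314 (greater depth, lighter pixel). This is a minimal example under a rectified-stereo assumption, not the disclosed implementation; the parameter names and values are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Depth of points seen by both cameras of a rectified stereo pair.

    disparity_px    : per-pixel horizontal offset (in pixels) between an image of
                      the video 308 and the corresponding image of the video 310
    focal_length_px : focal length of the cameras, in pixels (a camera parameter)
    baseline_mm     : linear distance between the two cameras (a camera parameter)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Depth is inversely proportional to disparity; guard against zero disparity.
    return np.where(disparity_px > 0,
                    focal_length_px * baseline_mm / np.maximum(disparity_px, 1e-6),
                    np.inf)

def shade_depth_map(depth_mm):
    """Map depths to grayscale shading: greater depth gives a lighter pixel (0..255)."""
    finite = np.isfinite(depth_mm)
    d = np.where(finite, depth_mm, depth_mm[finite].max())
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)
    return (d * 255).astype(np.uint8)

# Example: a small disparity image; nearer structures have larger disparities.
disparity = np.array([[20.0, 20.0, 5.0],
                      [18.0, 10.0, 5.0]])
depth = depth_from_disparity(disparity, focal_length_px=800.0, baseline_mm=5.0)
print(shade_depth_map(depth))
```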
- Figure 4 illustrates an example control system 300.
- Figure 4 shows the control system 300 segmenting an image and receiving a selection of a segment of the segmented image from the doctor.
- the control system 300 receives the video 308, which is an image stream that includes an image 402.
- the video 308 and the image 402 are captured by the camera 302.
- the video 308 and the image 402 show one or more anatomical objects 306 in the patient’s body.
- the control system 300 may use computer vision processes to analyze the image 402 and to identify the various objects that appear in the image 402 (e.g., the anatomical objects 306).
- the control system 300 detects the objects in the image 402 and segments the image into segments that contain these objects. Through segmentation, the control system 300 determines the boundaries of the different objects appearing in the image 402, including the anatomical objects 306.
- the control system 300 determines that the image 402 includes certain objects, and the control system 300 also determines the boundaries of each of the objects.
- the control system 300 segments the image 402 into segments 404, 406, and 408 according to these determined boundaries.
- Each of the segments 404, 406, and 408 may form a portion of the image 402, and each segment 404, 406, and 408 may show or contain a different object in the image 402.
- the doctor selects a pixel or a point in one of the segments 404, 406, and 408 to indicate the selection 410.
- This selected pixel or point may be referred to as a seed point.
- the segment 404, 406, or 408 that includes the seed point is determined to be the selected segment 416.
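- As an illustration, one simple way to resolve a seed point into a selected segment is to keep a per-pixel label map produced by the segmentation and return the mask of the label under the seed point. The sketch below is illustrative only; the label-map representation is an assumption, not taken from the disclosure.

```python
import numpy as np

def select_segment(label_image, seed_point):
    """Return a binary mask for the segment that contains the seed point.

    label_image : (H, W) integer array in which each segment (e.g., segments
                  404, 406, and 408) carries its own label value
    seed_point  : (x, y) pixel selected by the doctor (the seed point)
    """
    x, y = seed_point
    selected_label = label_image[y, x]        # label under the seed point
    return label_image == selected_label      # mask of the selected segment

# Example: a 4x4 label image with two segments; seed point (2, 1) selects label 2.
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 1, 1, 2],
                   [1, 1, 1, 1]])
print(select_segment(labels, (2, 1)).astype(int))
```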
- the control system 300 assists the doctor in selecting the primary label for the anatomy shown in the video 308 or the image 402.
- the control system 300 receives or stores a model 412 of the anatomical objects 306.
- the model 412 may be a three-dimensional model of the anatomical objects 306.
- the model 412 may be formed using submodels, with each submodel modeling one or more of the anatomical objects 306.
- the model 412 may show various structures of the kidney, with each submodel of the model 412 modeling one or more structures of the kidney.
- the model 412 may have been generated based on a pre-operative scan of the anatomical objects 306.
- the model 412 shows the outer surface and optionally, some of the interior of the anatomical objects 306.
- the model 412 may be manipulated (e.g., moved, rotated or zoomed) to show the different surfaces of the anatomical objects 306.
- the control system 300 may add an indicator 414 to the model 412 to designate the anatomical object 306 in the model 412 that is the primary label.
- the control system 300 may add a colored point (also referred to as an anchor point) onto the surface of the model 412 on the primary label.
- the control system 300 may color, highlight, or add a visual effect (e.g., flashing or strobing) onto the anatomical object 306 in the model 412 that is the primary label.
- the indicator 414 may indicate, on the model 412, the anatomical object 306 that serves as the primary label. The doctor views the indicator 414 on the model 412 to understand which object shown in the image 402 is the primary label.
- the indicator 414 may be positioned on the submodel of the parenchyma of the kidney.
- the indicator 414 may be a colored point added to the surface of the parenchyma shown in the model 412, or the indicator 414 may be a color, highlight, or visual effect placed on the parenchyma shown in the model 412.
- the doctor may view the model 412 along with the indicator 414 to understand that the parenchyma serves as the primary label.
- the doctor may select a segment 404, 406, or 408 in the video 308 or the image 402 that shows the parenchyma.
- the control system 300 uses rays to present the indicator 414 (which may be an anchor point) on the model 412. For example, the control system 300 may trace a virtual ray from the indicator 414 on the model 412 to the near plane of a view frustum. The control system 300 may also project the model 412 onto the near plane of the view frustum. The direction of the ray may be set according to the view direction provided to the doctor when viewing the model 412. Alternatively, the direction of the ray may be set according to an orientation of the input devices 126 and 128 or of a controller of the operator input system 206. In this manner, the control system 300 effectively projects the model 412 and the indicator 414 onto the near plane of the view frustum for the doctor to view.
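- As an illustration, tracing a ray from the indicator 414 (anchor point) to the near plane of the view frustum corresponds to a perspective projection of the anchor point along the view direction. The sketch below assumes a pinhole camera; the intrinsic parameter names and example values are illustrative, not taken from the disclosure.

```python
import numpy as np

def project_to_near_plane(anchor_point_cam, near_z, fx, fy, cx, cy):
    """Intersect the ray from the camera origin through the anchor point with
    the near plane of the view frustum, then convert to pixel coordinates.

    anchor_point_cam : (x, y, z) anchor point in camera/view coordinates, z > 0
    near_z           : distance of the near plane from the camera
    fx, fy, cx, cy   : pinhole intrinsics (focal lengths and principal point)
    """
    x, y, z = anchor_point_cam
    t = near_z / z                      # ray parameter at the near plane
    x_np, y_np = x * t, y * t           # intersection with the near plane
    u = fx * x_np / near_z + cx         # pixel coordinates for display
    v = fy * y_np / near_z + cy
    return u, v

# Example: an anchor point 80 units in front of the camera, slightly off-axis.
print(project_to_near_plane((5.0, -3.0, 80.0), near_z=1.0, fx=800.0, fy=800.0, cx=640.0, cy=360.0))
```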
- the doctor selects the selected segment 416 using any hardware, software, or combination thereof.
- the doctor may view the image 420 through a head-in system.
- the doctor operates a controller using the doctor’s hands to navigate a cursor to a point on the image 420.
- the doctor then indicates, using the controller, a selection of that point.
- the segment 404, 406, or 408 that contains the selected point may serve as the selected segment 416.
- the doctor may make multiple selections 410 to indicate multiple selected segments 416.
- Each of the selected segments 416 may show a different part of an anatomical structure. Using the previous example, each of the selected segments 416 may show a different structure of the kidney.
- Figure 5 illustrates an example control system 300.
- Figure 5 shows the control system 300 orienting the model 412.
- the control system 300 begins by reviewing the camera parameters 312 and a patient orientation 502.
- the camera parameters 312 indicate the positioning of the camera 302 that captured the video 308.
- the camera parameters 312 may include coordinates that indicate a positioning of the camera 302, as well as indications of the angle of the camera 302.
- the patient orientation 502 indicates the positioning and orientation of the patient 108 in the surgical system 100 or of the patient P on the table T in the surgical system 200.
- the patient orientation 502 may indicate whether the patient is face-up or face-down. Additionally, the patient orientation 502 may indicate in which direction the top of the head of the patient is directed.
- the sensor system 208 in the surgical system 200 includes sensors that detect the patient orientation 502. If the patient P is moved during the operation (e.g., rotated, flipped over, etc.), these sensors detect the movement and update the patient orientation 502 to be consistent with the new positioning of the patient P.
- In some embodiments, sensors (e.g., kinematics sensors) in the surgical system 100 (e.g., on the manipulator assembly 102) may detect the position or movement of components of the surgical system 100.
- the control system 300 may determine the patient orientation 502 from the detected position or movement of these components. For example, the control system 300 may determine the type of procedure that is being performed along with the positioning and orientation of the manipulator arms 112. The control system 300 may then determine from this information whether the patient is face-up or face-down, and in which direction the head of the patient is directed.
- the control system 300 uses the camera parameters 312 and the patient orientation 502 to determine how the anatomical objects 306 in the model 412 should be oriented. Specifically, the control system 300 determines whether the anatomical objects 306 in the model 412 are flipped around or backwards relative to the actual position of the anatomical objects 306 in the patient. The control system 300 may rotate the model 412 out-of-plane to reorient the model 412, so that the model 412 more closely aligns with the anatomical objects 306 shown in the video 308 and in the patient.
- the model 412 may be presented in a viewing plane (e.g., on a display). The control system 300 rotates the model 412 out-of-plane by rotating the model 412 about an axis or a vector residing in the viewing plane.
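- As an illustration, an out-of-plane rotation can be computed with Rodrigues' rotation formula about an axis that lies in the viewing plane (for example, the horizontal screen axis). The axis and angle in the sketch below are illustrative choices, not values from the disclosure.

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """3x3 rotation matrix about a unit 3-vector axis (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    kx, ky, kz = axis
    K = np.array([[0.0, -kz, ky],
                  [kz, 0.0, -kx],
                  [-ky, kx, 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

def rotate_out_of_plane(model_points, angle_rad, axis_in_viewing_plane=(1.0, 0.0, 0.0)):
    """Rotate model points about an axis lying in the viewing plane (the x-y plane
    here), which tips the model toward or away from the viewer."""
    R = rotation_matrix(axis_in_viewing_plane, angle_rad)
    return np.asarray(model_points, dtype=float) @ R.T

# Example: flip a model 180 degrees about the horizontal screen axis,
# e.g., to match a face-down patient orientation.
points = np.array([[0.0, 0.0, 10.0],
                   [1.0, 2.0, 12.0]])
print(rotate_out_of_plane(points, np.pi))
```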
- Figure 6 illustrates an example control system 300.
- Figure 6 shows the control system 300 setting the depth of the model 412 (which may be referred to as the registration depth).
- the control system 300 uses the depth map 314 to set the depth of the model 412.
- the selected segment 416 includes a mask that isolates or identifies the anatomical object 306 shown in the selected segment 416.
- the mask may include black pixels and white pixels. The white pixels may correspond to the anatomical object 306 and the black pixels may correspond with other parts of the image 402.
- the mask removes portions of the image 402 that do not show the anatomical object 306. That image is then intersected with the depth map 314 to determine the depth of the anatomical object 306 in the image 402.
- the control system 300 uses rays to select a segment and to determine the depth of the anatomical object 306 in the selected segment 416. For example, when the segment is selected in the image 402, the control system 300 may trace a virtual ray from a selected point in the selected segment 416 to a point on the depth map 314. The direction of the ray may be set according to the view direction provided to the doctor when viewing the image 402. Alternatively, the direction of the ray may be set according to an orientation of the input devices 126 and 128 or of a controller of the operator input system 206. The depth of the point on the depth map 314 serves as the depth of the selected point in the selected segment 416.
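- As an illustration, intersecting the selected segment's mask with the depth map 314 yields the depths of only those pixels that show the selected anatomical object, and a single registration depth can then be taken from them. The use of the median in the sketch below is an assumption made only to keep the example concrete.

```python
import numpy as np

def registration_depth(depth_map, segment_mask):
    """Single depth assigned to the anatomical object in the selected segment 416.

    depth_map    : (H, W) array of per-pixel depths (the depth map 314)
    segment_mask : (H, W) boolean array; True where the selected segment shows
                   the anatomical object (the mask removes other pixels)
    """
    object_depths = depth_map[segment_mask]
    object_depths = object_depths[np.isfinite(object_depths)]
    return float(np.median(object_depths))    # one representative depth

# Example: a 3x3 depth map in which the object occupies the left column.
depth_map = np.array([[40.0, 80.0, 90.0],
                      [42.0, 82.0, 91.0],
                      [41.0, 83.0, 92.0]])
mask = np.array([[True, False, False],
                 [True, False, False],
                 [True, False, False]])
print(registration_depth(depth_map, mask))    # -> 41.0
```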
- Figure 7 illustrates an example control system 300.
- Figure 7 shows the control system 300 adjusting the model 412 so that the model 412 aligns with the orientation of the anatomical object 306 shown in the selected segment 416.
- the control system 300 begins by generating a point cloud 702 using the depth map 314 and the selected segment 416.
- the control system 300 applies the mask for the selected segment 416 to the depth map 314 to identify or isolate, in the depth map 314, the anatomical object 306 shown in the selected segment 416.
- the control system 300 then generates the point cloud 702 for the isolated object in the depth map 314.
- the point cloud 702 is an arrangement of points for the anatomical object 306 shown in the selected segment 416 at the depth indicated in the depth map 314.
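- As an illustration, generating the point cloud 702 amounts to back-projecting every masked depth-map pixel into 3D using the camera geometry. The sketch below assumes a pinhole camera; the intrinsic parameter names and example values are illustrative, not taken from the disclosure.

```python
import numpy as np

def point_cloud_from_depth(depth_map, segment_mask, fx, fy, cx, cy):
    """Back-project masked depth-map pixels into an (N, 3) point cloud.

    Each masked pixel (u, v) with depth Z maps to the 3D point
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = Z.
    """
    v_idx, u_idx = np.nonzero(segment_mask)   # rows (v) and columns (u) of masked pixels
    z = depth_map[v_idx, u_idx]
    x = (u_idx - cx) * z / fx
    y = (v_idx - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Example: a 2x2 depth map with the object in the left column.
depth_map = np.array([[40.0, 80.0],
                      [42.0, 82.0]])
mask = np.array([[True, False],
                 [True, False]])
print(point_cloud_from_depth(depth_map, mask, fx=800.0, fy=800.0, cx=0.5, cy=0.5))
```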
- the control system 300 then generates a point cloud 704 for the model 412.
- the model 412 may have undergone the processes shown in Figures 5 and 6. Specifically, the model 412 may have been rotated out-of-plane so that the model 412 is oriented according to the camera parameters 312 and the patient orientation 502. Additionally, the model 412 may have been set at the depth indicated for the anatomical object shown in the selected segment 416 in the depth map 314. As a result, the point cloud 704 is an arrangement of points for the model 412.
- the point cloud 702 may have the same number of points or a different number of points as the point cloud 704.
- the control system 300 limits the point cloud 704 to the submodel of the anatomical object 306 in the model 412 that is shown in the selected segment 416.
- the control system 300 may generate the point cloud 704 using the submodel of the parenchyma in the model 412.
- the point cloud 704 includes points for the parenchyma.
- the control system 300 then rotates the model 412 in-plane until the model 412 aligns with the anatomical object 306 shown in the selected segment 416.
- the control system 300 rotates the model 412 in-plane by rotating the model 412 about an axis or vector extending out of the viewing plane (e.g. a viewing plane of a display).
- the axis or vector may be normal to the viewing plane or form a non-90 degree angle with the viewing plane.
- the control system 300 rotates the model 412 in-plane so that the point cloud 704 is close to or approximates the point cloud 702. After each rotation of the model 412, the control system 300 determines an error 706 between the point cloud 702 and the point cloud 704.
- the error 706 indicates by how much the point cloud 704 deviates from the point cloud 702.
- the error 706 may be a sum of the Euclidean distances between the corresponding points in the point cloud 702 and the point cloud 704. If the points in the point cloud 704 perfectly align with the points in the point cloud 702, then the error 706 is reduced to zero.
- the control system 300 compares the error 706 with a threshold 708 to determine whether the control system 300 should continue rotating the model 412 in-plane. If the error 706 exceeds the threshold 708, then the control system 300 continues rotating the model 412 in-plane to better align the point cloud 704 with the point cloud 702. If the error 706 does not exceed the threshold 708, then the control system 300 determines that the point cloud 704 is sufficiently aligned with the point cloud 702 and stops rotating the model 412 in-plane. Thus, by performing the operations shown in Figures 5, 6, and 7, the control system 300 moves the model 412 to align the model 412 with the view shown in the image 402.
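- As an illustration, the in-plane refinement can be sketched as a search over rotation angles about the viewing axis that stops once the error falls to or below the threshold 708. The coarse angle sweep and the nearest-neighbor form of the error in the sketch below are assumptions; the disclosure only requires that an error such as a sum of Euclidean distances drop below a threshold.

```python
import numpy as np

def nearest_neighbor_error(cloud_a, cloud_b):
    """Sum of Euclidean distances from each point of cloud_a to its nearest point of cloud_b."""
    dists = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return float(dists.min(axis=1).sum())

def rotate_in_plane(points, angle_rad):
    """Rotate points about the axis extending out of the viewing plane (the z axis here)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

def refine_in_plane(model_cloud, target_cloud, threshold, steps=360):
    """Sweep in-plane rotations; stop once the error drops to or below the threshold."""
    model_cloud = np.asarray(model_cloud, dtype=float)
    target_cloud = np.asarray(target_cloud, dtype=float)
    best_cloud, best_err = model_cloud, nearest_neighbor_error(model_cloud, target_cloud)
    for angle in np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False):
        candidate = rotate_in_plane(model_cloud, angle)
        err = nearest_neighbor_error(candidate, target_cloud)
        if err < best_err:
            best_cloud, best_err = candidate, err
        if best_err <= threshold:
            break                               # sufficiently aligned; stop rotating
    return best_cloud, best_err
```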
- control system 300 may move the model 412 so that the anatomical object 306 in the model 412 that serves as the primary label aligns with the anatomical object 306 shown in the selected segment 416.
- the control system 300 registers the model 412 with the image 402.
- the model 412 has the same (or very similar) orientation and the same (or very similar) depth as the view shown in the image 402.
- the control system 300 tracks changes in the view shown in the video 308. The control system 300 then makes corresponding adjustments to the model 412 so that the model 412 remains aligned with the view shown in the video 308 or the image 402.
- the control system 300 identifies corresponding points in the point cloud 702 and the point cloud 704. These corresponding point pairs may be points in the point clouds 702 and 704 that are positioned near or on the same portion of the object.
- the control system 300 determines a transformation (e.g., a rotation or translation) that will bring these point pairs close to each other.
- the control system 300 then applies the transformation to the model 412 and determines the error 706. If the error 706 is below the threshold 708, then the control system 300 considers the model 412 registered or aligned with the view shown in the video 308 or the image 402. If the error 706 exceeds the threshold 708, then the control system 300 determines another transformation that will bring the point pairs even closer to each other. This process may continue until the error 706 is reduced below the threshold 708.
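- As an illustration, the pairing-and-transformation loop described above can be sketched as an iterative-closest-point style procedure: pair each model point with its nearest target point, estimate a rigid transformation that brings the pairs together, apply it, and repeat until the error 706 drops below the threshold 708 or an iteration limit is reached (see the limit discussed below). The SVD-based transform estimate and the limit value are assumptions used only for illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Rotation R and translation t that best map src onto dst (SVD-based Kabsch step)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def iterative_align(model_cloud, target_cloud, threshold, max_iterations=50):
    """Pair nearest points, estimate a transform, apply it, and repeat until the
    error is below the threshold or the iteration limit is reached."""
    cloud = np.asarray(model_cloud, dtype=float)
    target_cloud = np.asarray(target_cloud, dtype=float)
    error = np.inf
    for _ in range(max_iterations):            # limit prevents endless adjustment
        dists = np.linalg.norm(cloud[:, None, :] - target_cloud[None, :, :], axis=2)
        pairs = target_cloud[dists.argmin(axis=1)]          # corresponding point pairs
        error = float(np.linalg.norm(cloud - pairs, axis=1).sum())
        if error <= threshold:
            return cloud, error, True          # registered
        R, t = best_rigid_transform(cloud, pairs)
        cloud = cloud @ R.T + t
    return cloud, error, False                 # registration failed; report to the doctor
```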
- the control system 300 determines multiple orientations for the model 412 that reduce the error 706 below the threshold 708 (and would thus be considered registered). For example, when there are symmetries in the model 412, the control system 300 may determine different rotations and orientations that cause the error 706 to be reduced below the threshold 708. In these instances, the control system 300 presents the multiple orientation candidates to the doctor for selection. The doctor views the candidates for the orientation of the model 412 and selects the candidate that shows the proper alignment with the image 402. In some embodiments, the control system 300 eliminates one or more candidates based on the camera parameters 312. For example, the camera parameters 312 may indicate a position or orientation of the camera 302. The control system 300 may determine from the position or orientation of the camera 302 that one or more of the candidates do not align with the view shown in the image 402. In response, the control system 300 may eliminate some of these candidates to make it easier for the doctor to select the correct candidate.
- the control system 300 may set a limit on the number of moves or adjustments that the control system 300 makes before terminating the registration process. For example, in some instances, the control system 300 may fail to bring the error 706 below the threshold 708 even after many iterations of moving the model 412. To prevent the control system 300 from endlessly moving or adjusting the model 412, the control system 300 may terminate the registration process after the number of moves or adjustments reaches the limit. The control system 300 may report that it failed to register the model 412 to alert the doctor that manual registration may be needed.
- the control system 300 may perform the operations shown in Figures 5, 6, and 7 for each or multiple of the selected segments 416 to generate multiple transformation candidates. For example, the control system 300 may translate, rotate out-of-plane, rotate in-plane, and/or adjust the depth of the model 412 to align the structure shown in a selected segment 416 with the corresponding structure shown in the video or image stream, which produces one transformation candidate. The control system 300 may repeat the operations for other selected segments 416 to align the structures shown in those selected segments 416 with the corresponding structures shown in the video or image stream to produce additional transformation candidates.
- the control system 300 may then present the transformation candidates to the doctor (e.g., on a display) and request the doctor to select the transformation candidate that best aligns with the video or image stream.
- the doctor may review the transformation candidates and make a selection.
- the control system 300 may then use the selected transformation candidate as the registered version of the model 412.
- the control system 300 overlays the registered model 412 onto the video or image stream so that the doctor may view the registered model 412 along with the video or image stream.
- the control system 300 overlays the registered model 412 onto the anatomical structure shown in the video or image stream such that the registered model 412 aligns with the anatomical structure.
- the control system 300 may provide different overlay options. For example, the control system 300 may generate the overlay with different backgrounds (e.g., a black background, shaded background, or a transparent or partially transparent background that blends with the video or image stream).
- the control system 300 may generate the overlay with different effects (e.g., virtual holes, shading, shadows, light projection, transparency, etc.) to improve the perception of the overlay on the video or image stream of a real anatomical structure.
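- A minimal sketch of these overlay options, assuming the registered model has already been rendered into an RGB image plus a binary mask; the `alpha` value and the `background` choices are illustrative rather than terms from the specification:

```python
import cv2
import numpy as np

def overlay_model(frame, model_render, model_mask, alpha=0.6, background="blend"):
    """frame, model_render: HxWx3 uint8 images; model_mask: HxW boolean array."""
    out = frame.copy()
    # Partial transparency: blend the rendered model with the live frame.
    blended = cv2.addWeighted(model_render, alpha, frame, 1.0 - alpha, 0.0)
    if background == "black":
        out[:] = 0                          # opaque black backdrop
    elif background == "shaded":
        out = (out * 0.4).astype(np.uint8)  # darken the live image behind the model
    out[model_mask] = blended[model_mask]   # paste the (semi-)transparent model
    return out
```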
- Figure 8 illustrates an example control system 300.
- Figure 8 shows the control system 300 tracking changes in the view shown in the video 308 and making corresponding adjustments to the model 412.
- the control system 300 receives the video 308, which includes an image stream.
- the image stream includes the image 402 and an image 802.
- the image 802 may be captured subsequently to the image 402.
- the image 402 and the image 802 may be frames of the video 308.
- the image 802 may be a frame that follows the image 402.
- the control system 300 tracks the positioning of a seed point 803 in the image 402 and the image 802.
- the seed point 803 may be a pixel or point in the selected segment 416 that the doctor selected when selecting the selected segment 416.
- the doctor may have selected the seed point 803 to indicate selection of the selected segment 416.
- the control system 300 detects changes in the views presented by the images 402 and 802. For example, if the camera 302 that captures the video 308 moves during the time between capturing the image 402 and the image 802, then the view provided in the image 802 may be different from the view provided in the image 402.
- the control system 300 analyzes the images 402 and 802 to detect an optical flow between the images 402 and 802. For example, the control system 300 may monitor the background or other objects in the images 402 and 802 to determine a movement of the background or the objects. The control system 300 then determines that the seed point 803 moved similarly from the image 402 to the image 802.
- the control system 300 determines a translation 804 based on the movement of the seed point 803 between the images 402 and 802.
- the translation 804 may be an expression that captures or describes how the seed point 803 moves from the image 402 to the image 802.
- the translation 804 may indicate a movement in-plane in the images 402 and 802. Additionally, the translation 804 may indicate a change in the depth from the image 402 to the image 802.
- the control system 300 applies the translation 804 to the model 412 so that the model 412 moves to align with the change in view from the image 402 to the image 802. For example, applying the translation 804 to the model 412 may cause the model 412 to translate in-plane. Additionally, applying the translation 804 to the model 412 may adjust the depth of the model 412. As a result, the model 412 moves to align with the view shown in the image 802. In this manner, the control system 300 continues to move and adjust the model 412 as the camera 302 that captures the video 308 moves within the patient’s body and as the view of the video 308 changes, so that the model 412 remains aligned with the view shown in the video 308 and continues to provide useful information to the doctor during the operation.
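- One way the seed-point tracking described above could be sketched, assuming grayscale frames and OpenCV's pyramidal Lucas-Kanade tracker (the specification does not mandate a specific optical-flow algorithm):

```python
import cv2
import numpy as np

def track_seed_point(prev_gray, next_gray, seed_xy):
    """Returns the (dx, dy) pixel translation of the seed point, or None if lost."""
    prev_pts = np.array([[seed_xy]], dtype=np.float32)        # shape (1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None)
    if status[0, 0] == 0:
        return None                                           # tracking lost
    return next_pts[0, 0] - prev_pts[0, 0]                    # in-plane translation
```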
- the movement in the video 308 may be determined by analyzing sensor data.
- For example, kinematics sensors, shape sensors, or position sensors (e.g., sensors positioned on the manipulator assembly 102 or on the elongate flexible device 220) may detect the movement.
- the control system 300 may analyze the data from the sensors to determine the movement that occurred in the video 308. For example, using this data, the control system 300 determines the position and orientation of the camera 302 as a matrix (e.g., a 4x4 transformation matrix). The control system 300 applies the inverse of this matrix to the model 412 to align the model 412 with the changed view in the video 308.
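- When the camera motion is available from sensor data as a 4x4 homogeneous transformation matrix, the model update reduces to applying the inverse transform to the model's points. A minimal sketch, in which the vertex-array layout is an assumption:

```python
import numpy as np

def apply_camera_motion(model_vertices, camera_motion_4x4):
    """model_vertices: (N, 3) points expressed in the camera frame before the motion."""
    inv = np.linalg.inv(camera_motion_4x4)                     # undo the camera's motion
    homogeneous = np.hstack([model_vertices, np.ones((len(model_vertices), 1))])
    moved = homogeneous @ inv.T
    return moved[:, :3]                                        # back to Cartesian points
```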
- the control system 300 receives the model 412 of the object.
- the model 412 may have been generated based on a pre-operative scan. For example, an endoscope may have been inserted into the patient’s body to capture images or video of anatomical objects.
- the model 412 may be a three-dimensional model of the anatomical objects that is generated based on the captured video or images of the anatomical objects. Because the orientation and the positioning of the endoscope used during the pre-operative scan may be different from the positioning and orientation of the endoscope used to capture the video 308, the model 412 may be positioned and oriented differently than the view shown in the video 308.
- the control system 300 segments the image 402 in block 906 to produce multiple segments 404, 406, and 408.
- the control system 300 may use computer vision techniques to identify or detect different objects in the image 402.
- the control system 300 may segment the image 402 into segments 404, 406, and 408 by determining or detecting the boundaries of these objects in the image 402.
- Each of the segments 404, 406, and 408 may contain or show one of the objects.
- the control system 300 receives the selection 410 of a segment 404, 406, or 408.
- the doctor performing the operation makes the selection 410 using any component of the surgical system (e.g., the input devices 126 and 128 or the operator input system 206). For example, the doctor may select a pixel in the image 402 that serves as the seed point 803. Because the control system 300 knows the boundaries of the segments 404, 406, and 408 in the image 402 through segmentation, the control system 300 determines which segment 404, 406, or 408 contains the seed point 803. The control system 300 then determines the segment 404, 406, or 408 that contains the seed point 803 as the selected segment 416.
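- A minimal sketch of mapping the doctor's seed point to the selected segment, assuming the segmentation step produced a label image in which every pixel stores its segment id (an implementation detail not specified in the document):

```python
import numpy as np

def select_segment(label_image, seed_xy):
    """label_image: HxW integer array of segment ids; seed_xy: (x, y) pixel coordinates."""
    x, y = seed_xy
    segment_id = label_image[y, x]             # id of the segment under the click
    selected_mask = label_image == segment_id  # boolean mask of the whole selected segment
    return segment_id, selected_mask
```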
- the doctor may select the selected segment 416 to indicate a primary label for the anatomical objects shown in the video 308 or the image 402.
- This primary label may be defined (e.g., in a lookup table) for the anatomical objects. For example, if the video 308 or the image 402 shows the structures of the kidney, then the primary label may be defined as the parenchyma of the kidney.
- the doctor may select a pixel in the image 402 or the video 308 that shows or that is on the parenchyma.
- the control system 300 determines the segment 404, 406, or 408 that includes the selected pixel as the selected segment 416. In this manner, the doctor identifies the primary label for the anatomical objects in the image 402 or the video 308.
- the control system 300 moves the model 412 to register the model 412 with the view shown in the image 402 in block 910 (e.g., so that the anatomical object 306 in the model 412 aligns with the anatomical object 306 shown in the selected segment 416).
- the control system 300 may rotate the model 412 out-of-plane so that the model 412 is oriented with respect to the camera parameters 312 and the patient orientation 502.
- the control system 300 may also set the depth of the model 412 (which may be referred to as a registration depth) so that the model 412 is at the same depth as the anatomical object in the selected segment 416 as shown in the depth map 314.
- the control system 300 may also rotate the model 412 in-plane so that the model 412 aligns with the anatomical object in the selected segment 416 as shown in the image 402. In some embodiments, the control system 300 rotates the model 412 in-plane until the error 706 between the point cloud 702 for the depth map 314 and the point cloud 704 for the model 412 reduces below the threshold 708.
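- The error check in this step could be sketched as a mean nearest-neighbour distance between the two point clouds, accepted once it drops below the threshold; the clouds are assumed to have been sampled elsewhere from the depth map 314 and from the model 412 at its candidate pose:

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_error(depth_cloud, model_cloud):
    """Both clouds are (N, 3) arrays of points in the camera frame."""
    distances, _ = cKDTree(model_cloud).query(depth_cloud)   # nearest model point per depth point
    return float(distances.mean())

def is_registered(depth_cloud, model_cloud, threshold):
    return registration_error(depth_cloud, model_cloud) < threshold
```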
- Figure 10 is a flowchart of an example method 1000.
- the control system 300 performs the method 1000.
- the control system 300 tracks changes in the view shown in the video 308 and continues moving or adjusting the model 412 so that the model 412 remains registered to the video 308.
- the control system 300 receives a first image 402 of an object.
- the control system 300 receives a second image 802 of the object.
- the first image 402 and the second image 802 may be frames of the video 308.
- the second image 802 may be a frame that is captured later in time than the first image 402.
- the camera 302 that captures the video 308 may have moved, which changes the view in the video 308.
- the view of the object in the first image 402 may be different from the view of the object in the second image 802.
- the perspective of the object shown in the first image 402 may be different from the perspective of the object shown in the second image 802.
- the control system 300 determines the translation 804, which indicates how the seed point 803 moved from the image 402 to the image 802. For example, if the view in the second image 802 is different from the view shown in the first image 402, then the control system 300 may determine a movement of the background or objects in the image 402 to arrive at the image 802. This movement may be the same movement that occurred to the seed point 803. The control system 300 may then determine the translation 804 that describes the movement. The translation 804 may indicate a movement in the plane of the images 402 and 802. Additionally, the translation 804 may indicate a change in the depth.
- the control system 300 applies the translation 804 to the model 412. For example, if the translation 804 indicates a movement in-plane, then the control system 300 may apply that movement in-plane to the model 412. As another example, if the translation 804 indicates a change to depth, then the control system 300 may change the depth of the model 412 accordingly. In this manner, the control system 300 moves or adjusts the model 412 so that the model 412 remains registered to the video 308, even though the view in the video 308 changes over time.
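- A minimal sketch of applying the tracked translation to the model, assuming a pinhole camera with a known focal length in pixels so that the in-plane pixel shift can be converted to a metric shift at the model's depth; the conversion and the depth-change convention are assumptions, not language from the specification:

```python
import numpy as np

def apply_translation(model_vertices, pixel_shift, depth_change, focal_length_px, model_depth):
    """model_vertices: (N, 3) points in the camera frame; pixel_shift: (dx, dy) in pixels."""
    dx = pixel_shift[0] * model_depth / focal_length_px   # pixel shift -> metres in x
    dy = pixel_shift[1] * model_depth / focal_length_px   # pixel shift -> metres in y
    return model_vertices + np.array([dx, dy, depth_change])
```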
- Figure 11 illustrates an example model adjustment.
- the process begins with the image 402.
- the image 402 shows a kidney.
- the control system 300 then segments the image 402 to produce a segmented image 1102.
- the segmented image 1102 shows the same structures as the image 402, and the segmented image 1102 shows the boundaries of the various structures in the image 402.
- the segmented image 1102 shows the boundaries of the parenchyma in the image 402.
- the segmented image 1102 includes multiple segments, with each segment showing or containing a structure identified in the image 402.
- the doctor selects one of the segments in the segmented image 1102. For example, the doctor may select a pixel or a point in the segmented image 1102.
- the control system 300 may determine the segment of the segmented image 1102 that contains the selected pixel or point.
- the control system 300 may then determine the segment that contains the selected pixel or point as the selected segment.
- the selected pixel or point serves as the seed point 803.
- the seed point 803 may be positioned on a pixel corresponding to the parenchyma. Because the seed point 803 is positioned within the boundary for the parenchyma, the control system 300 may determine that the segment that shows or includes the parenchyma is the selected segment 416.
- the control system 300 may have generated the depth map 314 for the image 402 by analyzing the videos 308 and 310 from the stereo cameras 302 and 304. Additionally, the control system 300 may use the camera parameters 312 to generate the depth map 314. As a result, the depth map 314 indicates the depths of different structures within the image 402. Points on these structures with similar depths are shaded similarly in the depth map 314. Points on these structures with different depths are shaded differently in the depth map 314.
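- A depth map like the one described above could, for example, be computed from the rectified stereo pair with a block matcher; this is a minimal sketch using OpenCV's semi-global matcher, assuming the focal length (pixels) and stereo baseline (metres) come from the camera parameters 312, and it is not the only algorithm the system could use:

```python
import cv2
import numpy as np

def stereo_depth_map(left_gray, right_gray, focal_length_px, baseline_m):
    """left_gray, right_gray: rectified grayscale frames from the stereo cameras."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan                   # mark invalid matches
    depth = focal_length_px * baseline_m / disparity     # depth in metres per pixel
    return depth
```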
- the control system 300 also receives the model 412.
- the model 412 is a three-dimensional model of the kidney, which may include submodels of different structures of the kidney, such as the parenchyma. Because the model 412 may have been generated during a pre-operative scan, in which the endoscope was positioned or oriented differently than the endoscope used to generate the image 402, the model 412 may be positioned or oriented differently than the kidney shown in the image 402.
- the control system 300 may add the indicator 414 (which may be a point) to the model 412 to indicate to the doctor which segment of the segmented image 1102 should be selected.
- the indicator 414 is added to the submodel of the parenchyma in the model 412.
- the doctor views the indicator 414 to understand that the segment that includes or shows the parenchyma should be selected.
- the doctor may then view the image 402 or the segmented image 1102 and select a pixel on the parenchyma to select the segment that includes the parenchyma. This pixel may serve as the seed point 803.
- the control system 300 then moves or adjusts the model 412 to align the model 412 with the view shown in the image 402.
- the control system 300 may rotate the model 412 out-of-plane (e.g., out of the plane of the image showing the model 412) so that the model 412 is oriented according to the camera parameters 312 and the patient orientation 502.
- the control system 300 may also set the depth of the model 412 according to the depth indicated in the depth map 314.
- the control system 300 may apply the selected mask 1104 to the depth map 314, the image 402, and/or the segmented image 1102 to isolate the parenchyma in the depth map 314, the image 402, and/or the segmented image 1102.
- the control system 300 may intersect the image 402 or the segmented image 1102 with the depth map 314 to determine the depth of the parenchyma in the image 402.
- the control system 300 may then set the depth of the model 412 as the depth of the parenchyma.
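- Intersecting the selected mask with the depth map to obtain a single registration depth could be sketched as a robust statistic over the masked depth values; the choice of the median here is an assumption:

```python
import numpy as np

def depth_of_segment(depth_map, segment_mask):
    """depth_map: HxW float array; segment_mask: HxW boolean array for the selected structure."""
    values = depth_map[segment_mask]
    values = values[np.isfinite(values)]      # drop invalid depth samples
    if values.size == 0:
        return None
    return float(np.median(values))           # registration depth for the structure
```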
- the control system 300 may also rotate the model 412 in-plane (e.g., in the plane of the image showing the model 412) so that the model 412 is aligned with the view shown in the image 402.
- the model 412 shows a view of the kidney that is aligned with the view of the kidney shown in the image 402 or the segmented image 1102.
- the submodel of the parenchyma in the model 412 is aligned with the parenchyma shown in the image 402 and the segment for the parenchyma shown in the segmented image 1102. If the endoscope that captured the image 402 is moved, the control system 300 may continue to adjust and move the model 412 so that the model 412 remains aligned with the view shown in the image 402.
- the surgical system 100 or 200 automatically aligns the model 412 of anatomical objects with a view of those anatomical objects in a video 308 or image stream, which may also be referred to as registering the model 412.
- the surgical system 100 or 200 receives an image 402 of multiple anatomical objects and segments (e.g., using a computer vision process) the image 402 to distinguish the various structures that appear in the image 402, such as the anatomical objects.
- the surgical system 100 or 200 may then request the doctor to identify the anatomical object in the segmented image that serves as the primary label for the anatomical objects.
- the doctor may select the anatomical object shown in the segmented image by selecting or indicating a point on the segmented image, which may be referred to as the seed point 803.
- the surgical system 100 or 200 then automatically aligns the model 412 of the anatomical objects with the anatomical objects shown in the video 308 or image stream. For example, the surgical system 100 or 200 may rotate the model 412 out-of-plane so that the orientation of the model 412 is consistent with the orientation of the patient.
- the surgical system 100 or 200 may set the depth of the model 412 to be the same as the depth of the anatomical object shown in the video 308 or image stream.
- the surgical system 100 or 200 may rotate the model 412 in-plane so that the anatomical object shown in the model 412 aligns with the anatomical object shown in the video 308 or image stream. After aligning the model 412, the surgical system 100 or 200 may detect changes in the view provided in the video 308 or image stream and re-align the model 412 with the changed view. In this manner, the surgical system 100 or 200 keeps the model 412 registered with the video 308, which provides more useful information to the doctor during an operation.
- spatially relative terms such as “beneath”, “below”, “lower”, “above”, “upper”, “proximal”, “distal”, and the like may be used to describe one element’s or feature’s relationship to another element or feature as illustrated in the figures.
- These spatially relative terms are intended to encompass different positions (i.e., locations) and orientations (i.e., rotational placements) of the elements or their operation in addition to the position and orientation shown in the figures. For example, if the content of one of the figures is turned over, elements described as “below” or “beneath” other elements or features would then be “above” or “over” the other elements or features.
- the exemplary term “below” can encompass both positions and orientations of above and below.
- a device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
- descriptions of movement along and around various axes include various special element positions and orientations.
- the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise.
- the terms “comprises”, “comprising”, “includes”, and the like specify the presence of stated features, steps, operations, elements, and/or components but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups.
- Components described as coupled may be electrically or mechanically directly coupled, or they may be indirectly coupled via one or more intermediate components.
- the term “position” refers to the location of an element or a portion of an element in a three-dimensional space (e.g., three degrees of translational freedom along Cartesian x-, y-, and z-coordinates).
- the term “orientation” refers to the rotational placement of an element or a portion of an element (three degrees of rotational freedom - e.g., roll, pitch, and yaw).
- shape refers to a set of positions or orientations measured along an element.
- proximal refers to a direction toward the base of the computer-assisted device along its kinematic chain and “distal” refers to a direction away from the base along the kinematic chain.
- the instruments, systems, and methods described herein may be used in other contexts.
- the instruments, systems, and methods described herein may be used for humans, animals, portions of human or animal anatomy, industrial systems, general robotic systems, or teleoperational systems.
- the instruments, systems, and methods described herein may be used for non-medical purposes including industrial uses, general robotic uses, sensing or manipulating non-tissue work pieces, cosmetic improvements, imaging of human or animal anatomy, gathering data from human or animal anatomy, setting up or taking down systems, training medical or non-medical personnel, and/or the like.
- Additional example applications include use for procedures on tissue removed from human or animal anatomies (with or without return to a human or animal anatomy) and for procedures on human or animal cadavers. Further, these techniques can also be used for medical treatment or diagnosis procedures that include, or do not include, surgical aspects.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Endoscopes (AREA)
Abstract
A computer system and method for adjusting models of anatomical objects during an operation are disclosed. The computer system includes a memory and a processor communicatively coupled to the memory. The processor receives a first image of a plurality of anatomical objects and segments the first image into a plurality of segments. A first segment of the plurality of segments shows an anatomical object of the plurality of anatomical objects. The processor also receives a user selection of the first segment and receives a model that includes a submodel of the anatomical object shown in the first segment. The processor further moves the model until the submodel of the anatomical object aligns with the anatomical object shown in the first segment.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363539518P | 2023-09-20 | 2023-09-20 | |
| US63/539,518 | 2023-09-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025064440A1 (fr) | 2025-03-27 |
Family
ID=92966456
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/047116 (pending) | Enregistrement et suivi assistés par ordinateur de modèles d'objets anatomiques | 2023-09-20 | 2024-09-17 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025064440A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010081094A2 (fr) * | 2009-01-09 | 2010-07-15 | The Johns Hopkins University | Système de calage et de superposition d'information sur des surfaces déformables à partir de données vidéo |
| CN107667380A (zh) * | 2015-06-05 | 2018-02-06 | 西门子公司 | 用于内窥镜和腹腔镜导航的同时场景解析和模型融合的方法和系统 |
| US10055848B2 (en) * | 2013-01-29 | 2018-08-21 | Brainlab Ag | Three-dimensional image segmentation based on a two-dimensional image information |
| WO2022146911A1 (fr) * | 2021-01-04 | 2022-07-07 | Intuitive Surgical Operations, Inc. | Classement préalable basé sur une image en vue d'un alignement et systèmes et procédés associés |
| WO2022180624A1 (fr) * | 2021-02-23 | 2022-09-01 | Mazor Robotics Ltd. | Tomodensitométrie pour l'enregistrement de radioscopie à l'aide d'entrées segmentées |
2024
- 2024-09-17 WO PCT/US2024/047116 patent/WO2025064440A1/fr active Pending
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24783469; Country of ref document: EP; Kind code of ref document: A1 |