WO2025039086A1 - Method and system for tracking a bone in computer-assisted surgery - Google Patents
Method and system for tracking a bone in computer-assisted surgery
- Publication number
- WO2025039086A1 (PCT/CA2024/051094)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bone
- tracking
- image capture
- imaging
- femur
- Prior art date
- Legal status (assumed; not a legal conclusion)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2048—Tracking techniques using an accelerometer or inertia sensor
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2059—Mechanical position encoders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
- A61B2034/256—User interfaces for surgical systems having a database of accessory information, e.g. including context sensitive help or scientific articles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/39—Markers, e.g. radio-opaque or breast lesions markers
- A61B2090/3937—Visible markers
- A61B2090/3945—Active visible markers, e.g. light emitting diodes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/39—Markers, e.g. radio-opaque or breast lesions markers
- A61B2090/3983—Reference marker arrangements for use with image guided surgery
Definitions
- the present disclosure relates to computer-assisted surgery, such as computer-assisted surgery systems used in orthopedic surgery to track bones and tools.
- Computer-assisted surgery commonly employs tracker systems to provide an operator with navigation data through a surgical procedure.
- the navigation data may take various forms, including position and orientation data pertaining to bones and tools, predicted alterations, imaging, etc.
- the computer-assisted surgery systems may also include robotic apparatuses to perform some steps of surgical procedures.
- a challenge faced during procedures including computer-assisted navigation is the tracking of bone surfaces with the tracking equipment. More particularly, computer-assisted procedures in the orthopedic field often require the acquisition of clouds of points on a bone surface, for the subsequent tracking of the bone surface. The acquisition may be lengthy, and may result in inaccuracies.
- a system for tracking a bone in computer-assisted surgery comprising: a controller including a processing unit, and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using a tracker on the bone; imaging the bone with the tracker; processing the imaging of the bone with the tracker to generate an image of a portion of the bone; adding the image of the portion of the bone generated by the image processing to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system.
- imaging the bone with the tracker includes imaging the bone with an optical tracker.
- imaging the bone with the tracker includes imaging the bone using an image capture device providing a three-dimensional model of the bone.
- imaging the bone with the tracker includes imaging the bone using an image capture device providing a three-dimensional model of tissues associated with the bone using spectroscopy.
- the system includes a robot having a robot arm, the robot being operated by a robot controller of the controller.
- the system includes one or more image capture devices in communication with the controller, the one or more image capture devices being used by the controller for: the imaging of the bone with the tracker; and/or the digitizing and tracking of the coordinate system.
- the one or more image capture devices are used by the controller to perform a virtual reality, augmented reality and/or mixed reality guidance.
- a system for tracking a bone in computer-assisted surgery comprising: a controller including a processing unit, and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using at least one image capture device having depth imaging capacity to generate three-dimensional models; imaging the bone with the at least one image capture device; processing the imaging of the bone to add an image of a portion of the bone to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system with the at least one image capture device.
- system as defined above and described herein may further include any one or more of the following features, in whole or in part, and in any combination.
- the system includes a robot in communication with the controller and having a robotic tool, wherein the controller outputs navigation data of the robotic tool from the tracking of the coordinate system with the at least one image capture device.
- a robot is in communication with the controller and has a robotic tool having the at least one image capture device secured thereto, wherein the controller outputs navigation data of the robotic tool.
- the computer-readable program instructions executable by the processing unit for imaging the bone with the at least one image capture device further include using spectroscopic imaging from the at least one image capture device to image the bone with cartilage.
- the computer-readable program instructions executable by the processing unit for tracking of the coordinate system with the at least one image capture device further include capturing images of the bone through soft tissue with the at least one image capture device to track the bone.
- the tracking of the coordinate system is done without a tracker pinned to the bone.
- the one or more image capture devices are mounted in a head-mounted display adapted to be worn by a user of the system.
- the head-mounted display is in communication with the controller, and the computer-readable program instructions executable by the processing unit provide the head-mounted display with a virtual reality, augmented reality and/or mixed reality guidance integrating the navigation data of the bone.
- a system for tracking a femur in computer-assisted surgery comprising: a controller including a processing unit and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one axis of a femur; obtaining a model of an articular surface of the femur; adding the model of the articular surface of the femur to the coordinate system; generating at least one plane for the articular surface of the femur using the model; and outputting navigation data of the femur including the at least one plane from the tracking of the coordinate system.
- system as defined above and described herein may further include any one or more of the following features, in whole or in part, and in any combination.
- generating at least one plane for the articular surface of the femur includes identifying distalmost points of a lateral condyle and of a medial condyle of the femur, the distalmost points lying in the plane.
- generating at least one plane for the articular surface of the femur includes orienting the plane relative to the at least one axis of the femur.
- generating at least one plane for the articular surface of the femur includes identifying posterior-most points of a lateral condyle and of a medial condyle of the femur, the posterior-most points lying in the plane.
- generating at least one plane for the articular surface of the femur includes orienting the plane relative to the at least one axis of the femur, the plane including the posterior-most points.
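- By way of illustration only, the following sketch (Python with numpy; the function name and point coordinates are invented for the example and are not taken from the disclosure) constructs a plane containing the distal-most points of the lateral and medial condyles and orients it relative to a digitized mechanical axis:

```python
import numpy as np

def distal_plane(lat_condyle, med_condyle, mech_axis_dir):
    """Plane through the distal-most condyle points, oriented relative to the
    mechanical axis: the normal is the mechanical-axis direction made
    orthogonal to the medio-lateral line joining the two points."""
    p_lat = np.asarray(lat_condyle, dtype=float)
    p_med = np.asarray(med_condyle, dtype=float)
    axis = np.asarray(mech_axis_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)

    ml = p_med - p_lat                       # medio-lateral direction lying in the plane
    ml = ml / np.linalg.norm(ml)

    # Remove the medio-lateral component so both condyle points lie in the plane.
    normal = axis - np.dot(axis, ml) * ml
    normal = normal / np.linalg.norm(normal)

    point_on_plane = 0.5 * (p_lat + p_med)   # midpoint of the two condyle points
    d = -np.dot(normal, point_on_plane)      # plane equation: normal . x + d = 0
    return normal, d

# Example with invented coordinates (millimetres, tracker coordinate system).
normal, d = distal_plane([20.0, -5.0, 0.0], [-25.0, -4.0, 1.0], [0.05, 1.0, 0.02])
print(normal, d)
```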
- a system for tracking a bone in computer-assisted surgery comprising: a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using a tracker on the bone; imaging the bone with the tracker; processing the imaging of the bone with the tracker to add an image of a portion of the bone to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system.
- a system for tracking a femur in computer-assisted surgery comprising: a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one axis of a femur; obtaining a model of an articular surface of the femur; adding the model of the articular surface of the femur to the coordinate system; generating at least one plane for the articular surface of the femur using the model; and outputting navigation data of the femur including the at least one plane from the tracking of the coordinate system.
- a system for tracking a bone in computer-assisted surgery comprising: a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using at least one image capture device having depth imaging capacity to generate three-dimensional models; imaging the bone with the at least one image capture device; processing the imaging of the bone to add an image of a portion of the bone to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system with the at least one image capture device.
- FIG. 1 is a schematic view of a computer-assisted surgery system for tracking a bone in accordance with the present disclosure;
- FIG. 2 is a block diagram of the computer-assisted surgery system of Fig. 1;
- FIG. 3 is a schematic perspective view of an exemplary bone with tracker as used with the computer-assisted surgery system of Fig. 1;
- Fig. 4 is a photograph taken as part of a procedure involving the computer-assisted surgery system of Fig. 1;
- Fig. 5 is a schematic elevation view of the exemplary bone with tracker of Fig. 3 when defining reference planes with the computer-assisted surgery system of Fig. 1;
- FIG. 6 is a schematic view of processing performed on bone models in a bone atlas or library to create a reference table of bone models for intraoperative use;
- Fig. 7 is an example of a reference envelope step that may be performed to facilitate a comparison of imaged bones with bone models in a bone atlas or library;
- Fig. 8 shows a comparison between top models of the reference table with a probabilistic region of an imaged bone; and
- Fig. 9 is a representation of a resulting intraoperative bone model.
- a computer-assisted surgery (CAS) system for tracking bones is generally shown at 10, and is used to provide surgery assistance to an operator.
- the system 10 may also be referred to as the surgery assistance (SA) system 10.
- the system 10 is shown relative to a dummy patient in prone decubitus, but only as an example.
- the system 10 could be used for any body part, including non-exhaustively knee joint, hip joint, spine, and shoulder bones, for orthopedic surgery, but could also be used in other types of surgery.
- the system 10 could be used for surgery of all sorts, such as brain surgery, and soft tissue surgery.
- the system 10 may be robotized in a variant, but the robotized aspect is optional.
- the system 10 is shown in Figs. 1 and 2 with a robot 20.
- the system 10 has, may have or may be used with a robot 20, optical trackers 30, a tracker device 40, a CAS controller 50 (also known as a super controller 50), a tracking module 60, a robot controller 70 (also known as a robot driver), an image capture device 80, or any combination thereof.
- the robot 20, shown by its robot arm 20A, may optionally be present as the working end of the system 10, and may be used to perform or guide bone alterations as planned by an operator and/or the CAS controller 50 and as controlled by the CAS controller 50.
- the robot arm 20A may also be configured for a collaborative/cooperative mode in which the operator may manipulate the robot arm 20A, or the tool supported by the robot arm 20A, though the tool may be operated by a human operator.
- the tooling end, also known as an end effector, may be manipulated by the operator while supported by the robot arm 20A.
- the robot 20 may be the coordinate measuring machine (CMM) of the robotic surgery system 10.
- the optical trackers 30 are positioned on the robot 20, on patient tissue (e.g., bones B), and/or on the tool(s) T and surgical instruments, and provide tracking data for the robot 20, the patient and/or tools.
- the tracking device 40, also known as a sensor device, apparatus, etc., performs optical tracking of the optical trackers 30, so as to enable the tracking in space (a.k.a., navigation) of the robot 20, the patient and/or tools.
- the CAS controller 50, also known as the super controller, includes the processor(s) and appropriate hardware and software to run a computer-assisted surgery procedure in accordance with one or more workflows.
- the CAS controller 50 may include or operate the tracking device 40, the tracking module 60, and/or the robot controller 70 (if robot 20 is present). As described hereinafter, the CAS controller 50 may also drive the robot arm 20A through a planned surgical procedure.
- the tracking module 60 is tasked with determining the position and/or orientation of the various relevant objects during the surgery procedure, such as the end effector of the robot arm 20, bone(s) B and tool(s) T, using data acquired by the tracking device 40 and by the robot 20, and/or obtained from the robot controller 70.
- the position and/or orientation may be used by the CAS controller 50 to control the robot arm 20A.
- the robot controller 70 is tasked with powering or controlling the various joints of the robot arm 20A, based on operator demands or on surgery planning, and may also be referred to as a robot controller module that is part of the super controller 50.
- the robot controller 70 may also optionally calculate robot movements of the robot arm 20A, so as to control movements of the robot arm 20A autonomously in some instances, i.e., without intervention from the CAS controller 50.
- An image capture device(s) 80 may also be present, for instance as a complementary registration tool.
- the image capture device(s) 80 may for instance be self-standing, or may be mounted on the robot 20, such as on the robot arm 20A, such that the point of view of the camera is known in the frame of reference, also known as the coordinate system.
- the image capture device(s) 80 may alternately be part of a head-mounted display (e.g., headcam), among other possibilities, and thus may be used as part of a virtual reality, augmented reality and/or mixed reality system configuration of the present system 10, wherein both real time video/images of the bone or patient tissue may be combined, added, merged or otherwise integrated with the virtual model(s) and digitized coordinate system, and mixed or augmented reality guidance is provided by the system 10.
- the head-mounted display may provide mixed or augmented reality guidance to the user, with or without integration with the robot.
- the SA system 10 also includes an artificial intelligence (Al) system 90 that is in communication with, and/or forms part of, the CAS controller 50.
- the Al system 90 includes a machine-learning module 90A and an assistance module 90B.
- the machine-learning module 90A is the learning module of the SA system 10 that receives data from surgical procedures to parametrize / train a machine-learning algorithm (MLA), which may be stored in the non-transitory memory of the CAS controller 50, or in any other suitable non-transitory memory such as remote memory on the cloud, for example.
- the assistance module 90B is the intervening module of the SA system 10 that provides machine-learned assistance during a surgical procedure based on the parametrized / trained machine-learning algorithm (MLA).
- interfaces I/F, such as displays with or without a graphic user interface, screens, hand-held devices, computer stations, servers, and the like, may also be part of the SA system 10.
- the robot 20 is illustrated, though it is an optional part of the system 10.
- the robot 20 may have the robot arm 20A stand from a base 20B, for instance in a fixed relation relative to the operating-room (OR) table supporting the patient, whether it is attached or detached from the table.
- the robot arm 20A has a plurality of joints 21 and links 22, of any appropriate form, to support an end effector 23 that may interface with the patient, or may be used during surgery without interfacing with the patient.
- the end effector or tool head may optionally incorporate a force/torque sensor for collaborative/cooperative control mode, in which an operator manipulates the robot arm 20A.
- the robot arm 20A is shown being a serial mechanism, arranged for the tool head 23 to be displaceable in a desired number of degrees of freedom (DOF).
- the tool head 23 may for example be a support that is not actuated, the support being used to support a tool, with the robot arm 20A used to position the tool relative to the patient.
- the robot arm 20A controls 6-DOF movements of the tool head, i.e., X, Y, Z in the coordinate system, as well as pitch, roll and yaw. Fewer or additional DOFs may be present.
- only a fragmented illustration of the joints 21 and links 22 is provided, but more joints 21 of different types may be present to move the end effector 23 in the manner described above.
- the joints 21 are powered for the robot arm 20A to move as controlled by the CAS controller 50 in the six DOFs, and in such a way that the position and orientation of the end effector 23 in the coordinate system may be known, for instance by readings from encoders on the various joints 21. Therefore, the powering of the joints is such that the end effector 23 of the robot arm 20A may execute precise movements, such as moving along a single direction in one translation DOF, or being restricted to moving along a plane, among possibilities.
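- As a hedged illustration of how an end effector pose can be derived from joint encoder readings, the sketch below chains per-joint homogeneous transforms; the joint layout, link lengths and function names are invented for the example and do not reflect the actual kinematics of the robot arm 20A:

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about Z by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def end_effector_pose(joint_angles, link_lengths):
    """Chain the per-joint transforms (a rotation read from an encoder, then a
    fixed link offset) to express the end effector in the base frame."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(length, 0.0, 0.0)
    return T

# Example: three revolute joints with invented encoder readings and link lengths (metres).
pose = end_effector_pose([0.2, -0.4, 0.1], [0.35, 0.30, 0.15])
print(pose[:3, 3])   # end effector position (x, y, z) in the base frame
```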
- Such robot arms 20A are known, for instance as described in United States Patent Application Publication No. US 2007/0156157, which is incorporated herein by reference.
- the end effector 23 of robot arm 20A may be defined by a chuck or like tool interface, typically actuatable in rotation.
- numerous tools may be used as end effector for the robot arm 20, such tools including a registration pointer as shown in Fig. 1, equipped with a tracker device 30, a reamer (e.g., cylindrical, tapered), a reciprocating saw, a retractor, a camera, an ultrasound unit, a laser rangefinder or light-emitting device (e.g., the indicator device of US Patent No. 8,882,777, incorporated herein by reference), a laminar spreader, an instrument holder, or a cutting guide, depending on the nature of the surgery.
- the various tools may be part of a multi-mandible configuration or may be interchangeable, whether with human assistance, or as an automated process.
- the installation of a tool in the tool head may then require some calibration in order to track the installed tool in the X, Y, Z coordinate system of the robot arm 20.
- the end effector 23 of the robot arm 20A may be positioned by the robot 20 relative to the surgical area in a desired orientation according to a surgical plan, such as a plan based on preoperative imaging. Due to the proximity between the robot 20 and the surgical area, the robot 20 may be covered partially with a surgical drape D, also known as a surgical robotic drape.
- the surgical drape D is a sterile panel (or panels), tubes, bags or the like that form(s) a physical barrier between the sterile zone (e.g., surgical area) and some equipment that may not fully comply with sterilization standards, such as the robot 20.
- the surgical drape D is transparent such that one can see through the drape D.
- the robot is entirely covered with the surgical drape D, and this includes the base 20B, but with the exception of the end effector 23. Indeed, as the end effector 23 interacts or may interact with the human body, it may be sterilized and may not need to be covered by the surgical drape D, to access the patient. Some part of the robot 20 may also be on the sterile side of the surgical drape D. In a variant, a portion of the robot arm 20 is covered by the surgical drape D.
- the surgical drape D may be in accordance with United States Patent Application No. 15/803,247 filed on November 3, 2017, which is incorporated herein by reference.
- the CAS controller 50 can manipulate the robot arm 20A automatically (without human intervention), or by a surgeon manually operating the robot arm 20A (e.g., physically manipulating, via a remote controller through the interface I/F) to move the end effector 23 of the robot arm 20A to the desired location, e.g., a location called for by a surgical plan to align an instrument relative to the anatomy.
- a step of a surgical procedure can be performed, such as by using the end effector 23.
- a tracker device 30 may optionally be secured to the distalmost link, and may be distinct from the tracker device 30 on the instrument supported by the end effector 23.
- the robot arm 20A may include sensors 25 in its various joints 21 and links 22.
- the sensors 25 may be of any appropriate type, such as rotary encoders, optical sensors and position switches (a non-exhaustive list of potential sensors), for the position and orientation of the end effector, and of the tool in the end effector 23, to be known.
- the tracking module 60 may determine the position and orientation of the robot 20 in a frame of reference of the robot 20, such as by obtaining the position (x, y, z) and orientation (phi, theta, rho) of the end effector 23 from the CAS controller 50 using the sensors 25 in the robot arm 20A, i.e., robot coordinates may be an integrated function of the robot 20 in that it may determine the position and orientation of its end effector 23 with respect to its coordinate system.
- the robot 20 may be the coordinate measuring machine (CMM) of the robotic surgery system 10, with a frame of reference (e.g., coordinate system, referential system) of the procedure being relative to the fixed position of the base 20B of the robot 20.
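- A minimal sketch of this CMM idea, assuming numpy and an invented camera-to-base transform: a point measured in the tracker-device frame can be expressed in the frame of reference attached to the fixed base 20B by composing homogeneous transforms:

```python
import numpy as np

def to_base_frame(T_base_camera, point_camera):
    """Express a point measured in the tracker-device (camera) frame in the
    robot-base frame, given the 4x4 camera-to-base transform."""
    p = np.append(np.asarray(point_camera, dtype=float), 1.0)   # homogeneous coordinates
    return (T_base_camera @ p)[:3]

# Invented example: camera 1.2 m in front of the base, looking back along X.
T_base_camera = np.array([[-1.0, 0.0,  0.0, 1.2],
                          [ 0.0, 1.0,  0.0, 0.0],
                          [ 0.0, 0.0, -1.0, 0.8],
                          [ 0.0, 0.0,  0.0, 1.0]])
print(to_base_frame(T_base_camera, [0.1, 0.05, 0.6]))
```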
- the sensors 25 must provide the precision and accuracy appropriate for surgical procedures.
- the coupling of tools to the robot arm 20A may automatically cause a registration of the position and orientation of the tools in the frame of reference of the robot 20, though steps of calibration could be performed, as explained below.
- an exemplary tracker 30 is shown secured to the instrument T, and may also or alternatively be on the robot arm 20 and/or on the bones B (Fig. 3).
- the trackers 30 may be known as trackable elements, markers, navigation markers, active sensors (e.g., wired or wireless) that may for example include infrared emitters.
- the trackers 30 are passive retro-reflective elements, that reflect light.
- the trackers 30 have a known geometry so as to be recognizable through detection by the tracker device 40.
- the trackers 30 may be retro-reflective tokens, patches, spheres, etc.
- the tracker 30 of Figs. 1 and 3 may be as described in U.S. Patent No. 8,386,022, which is incorporated herein by reference, and may thus be known as a multifaceted tracker.
- the tracker device 40 is shown as being embodied by an image capture device, capable of illuminating its environment.
- the tracker device 40 may have two (or more) points of view, each defined by a camera for example, such that triangulation can be used to determine the position of the tracker devices 30 in space, i.e., the coordinate system of the robotic surgery system 10.
- the tracker device 40 may emit light, or use ambient light, to observe the trackers 30 from its points of view, so as to determine a position of the trackers 30 relative to itself.
- the tracker device 40 can produce navigation data enabling the locating of objects within the coordinate system of the robotic surgery system 10.
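- A simplified two-view triangulation of the kind the tracker device 40 relies on conceptually can be sketched as the midpoint of closest approach between two viewing rays; the camera positions and target below are invented values for illustration only:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point closest to two viewing rays,
    each given by a camera centre c and a direction d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for the ray parameters minimising the distance between the rays.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Invented example: two points of view 20 cm apart observing a marker.
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
target = np.array([0.05, 0.1, 1.0])
print(triangulate(c1, target - c1, c2, target - c2))   # ~ [0.05, 0.1, 1.0]
```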
- the tracker device 40 is of the type known as the Polaris products by Northern Digital Inc.
- the tracker device 40 may form the complementary part of the CMM function of the robotic surgery system 10, with the trackers 30 on the robot base 20B for example.
- the CAS controller 50 is shown in greater detail relative to the other components of the robotic surgery system 10.
- the CAS controller 50 has a processing unit 51 and a non-transitory computer-readable memory 52 communicatively coupled to the processing unit 51 and configured for executing computer-readable program instructions executable by the processing unit 51 to perform functions, such as tracking the patient tissue and tools, using the position and orientation data from the robot 20 and the readings from the tracker device 40.
- the computer-readable program instructions may include an operating system that may be viewed by a user or operator as a graphical user interface (GUI) on one or more of the interfaces of the robotic surgery system 10. It is via this or these interfaces that the user or operator may interface with the SA system 10, be guided by a surgical workflow, obtain navigation data, etc.
- the CAS controller 50 may also control the movement of the robot arm 20A, if present, via the robot controller module 70.
- the system 10 may comprise various types of interfaces l/F, for the information to be provided to the operator.
- the interfaces I/F may include displays and/or screens, including wireless portable devices (e.g., phones, tablets), audio guidance, LED displays, and head-mounted displays for virtual reality, augmented reality or mixed reality, among many other possibilities.
- the interface l/F comprises a graphic-user interface (GUI) operated by the system 10.
- the system 10 may be used in the context of a CAS system that does not include a robot or robotic arm, while still being capable of being integrated and used with head-mounted display, including or being in communication with the image capture device(s), and thus used as part of a virtual reality, augmented reality and/or mixed reality system configuration.
- the CAS controller 50 may also display images captured pre-operatively, or using one or more image capture device(s) 80 associated with the procedure (e.g., 3D camera, laparoscopic cameras, tool-mounted cameras), for instance to be used in the collaborative/cooperative control mode of the system 10, or for visual supervision by the operator of the system 10, with augmented reality for example.
- the CAS controller 50 may drive the robot arm 20A, in performing the surgical procedure based on the surgery planning achieved pre-operatively, or in maintaining a given position and orientation to support a tool.
- the CAS controller 50 may run various modules, in the form of algorithms, code, non-transient executable instructions, etc., in order to operate the robotic surgery system 10 in the manner described herein.
- the CAS controller 50 may be part of any suitable processor unit, such as a personal computer or computers including laptops and desktops, tablets, server, etc.
- the tracking module 60 may be a subpart of the CAS controller 50, or an independent module or system.
- the tracking module 60 receives the position and orientation data from the robot 20 and the readings from the tracker device 40.
- the tracking module 60 may hence determine the positions of the objects relative to the robot arm 20A in a manner described below.
- the tracking module 60 may also be provided with virtual (i.e., digital) models of the objects to be tracked.
- the tracking module 60 may track bones and tools, and hence may use virtual bone models and tool models.
- the bone models may be acquired from pre-operative imaging (e.g., MRI, CT-scans), for example in 3D or in multiple 2D views, including with 2D X-ray to 3D bone model technologies.
- the virtual bone models may also include some image processing done preoperatively, for example to remove soft tissue or refine the surfaces that will be exposed and tracked.
- the virtual bone models may be of greater resolution at the parts of the bone that will be tracked during surgery, such as the knee articulation in knee surgery. This is described in further detail below.
- the bone models may also carry additional orientation data, such as various axes (e.g., longitudinal axis, mechanical axis, etc.).
- the virtual bone models may therefore be patient specific.
- the virtual bone models may also be obtained from a bone model library, with the data obtained from the video images used to match a generated 3D surface of the bone with a bone from the bone atlas.
- the bone models are limited to a 3D surface of part of a bone, such as an articular surface being operated on.
- Such a simplified bone model, which may help with the computational efficiency of the system, could also include orientation data, such as a center of a femoral head.
- the center of the femoral head may be obtained by tracking movements of the bone (i.e., femur) relative to the pelvis, such that the movements allow a determination of the center of rotation.
- the bone model can have patient-specific data that is limited to a 3D surface of only a portion of the bone (e.g., a part of the bone that is expected to be exposed during the surgery), and to a center of rotation for example.
- the system described herein may then generate a virtual model of the bone from this patient-specific data, such as by using as a reference a bone model from an atlas, as explained below.
- the virtual tool models may be provided by the tool manufacturer, or may also be generated in any appropriate way so as to be a virtual 3D representation of the tool(s).
- Additional data may also be available, such as tool orientation (e.g., axis data and geometry).
- the tracking module 60 may obtain additional information, such as the axes related to bones or tools.
- the CAS controller 50 may have the robot controller 70 integrated therein.
- the robot controller 70 may be physically separated from the CAS controller 50, for instance by being integrated into the robot 20 (e.g., in the robot base 20B).
- the robot controller 70 is tasked with powering and/or controlling the various joints of the robot arm 20A.
- the robot controller 70 may also optionally calculate robot movements of the robot arm 20A, so as to control movements of the robot arm 20A autonomously in some instances, i.e., without intervention from the CAS controller 50.
- the robot controller 70 is capable of performing actions based on a surgery planning.
- the surgery planning may be a module programmed specifically for any given patient, according to the parameters of surgery desired by an operator such as an engineer and/or surgeon.
- the parameters may include geometry of selected, planned bone cuts, planned cut depths, sequence or workflow of alterations with a sequence of surgical steps and tools, tools used, etc.
- the image capture device 80 may produce structured light illumination for tracking objects with structured light 3D imaging.
- In structured light illumination, a portion of the objects is illuminated with one or multiple patterns from a pattern projector or like light source that may or may not be part of the image capture device 80.
- Structured light 3D imaging is based on the fact that a projection of a line of light from the pattern projector onto a 3D shaped surface produces a line of illumination that appears distorted as viewed from perspectives other than that of the pattern projector. Accordingly, imaging such a distorted line of illumination allows a geometric reconstruction of the 3D shaped surface.
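- As a simplified sketch of this geometric reconstruction (assuming the projected pattern can be modelled as a plane of light and the camera ray through each illuminated pixel is known; the geometry below is invented for the example), the 3D surface point is the intersection of the camera ray with the light plane:

```python
import numpy as np

def ray_plane_intersection(cam_center, ray_dir, plane_normal, plane_d):
    """Intersect a camera viewing ray with a projected light plane
    (plane: normal . x + d = 0); returns the illuminated 3D surface point."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = -(plane_normal @ cam_center + plane_d) / (plane_normal @ ray_dir)
    return cam_center + t * ray_dir

# Invented geometry: a plane of light projected from the side, observed by a
# camera at the origin; each illuminated pixel defines one viewing ray.
plane_normal = np.array([1.0, 0.0, 0.3])
plane_d = -0.25
cam = np.array([0.0, 0.0, 0.0])
pixel_ray = np.array([0.1, 0.05, 1.0])      # ray through one illuminated pixel
print(ray_plane_intersection(cam, pixel_ray, plane_normal, plane_d))
```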
- the structured light grid pattern can be produced by incoherent light projection, e.g., using a digital video projector, wherein the patterns are typically generated by propagating light through a digital light modulator.
- digital light projection technologies include transmissive liquid crystal, reflective liquid crystal on silicon (LCOS) and digital light processing (DLP) modulators.
- the resolution of the structured light grid pattern can be limited by the size of the emitting pixels of the digital projector.
- patterns generated by such digital display projectors may have small discontinuities due to the pixel boundaries in the projector. However, these discontinuities are generally sufficiently small that they are insignificant in the presence of a slight defocus.
- the structured light grid pattern can be produced by laser interference. For instance, in such embodiments, two or more laser beams can be interfered with one another to produce the structured light grid pattern wherein different pattern sizes can be obtained by changing the relative angle between the laser beams.
- the pattern projector may emit light that is inside or outside the visible region of the electromagnetic spectrum.
- the emitted light can be in the ultraviolet region and/or the infrared region of the electromagnetic spectrum such as to be imperceptible to the eyes of the medical personnel.
- the medical personnel may be required to wear protective glasses to protect their eyes from such invisible radiations.
- the image capture device 80 may also operate with laser rangefinder technology or triangulation, as a few examples among others.
- the image capture device 80 may consequently include the lens(es) and related equipment (e.g., CCD) to acquire backscatter images of the illuminated portion of objects.
- the lens(es) (a.k.a., camera(s)) capture the pattern projected onto the portions of the object.
- the lens(es) is(are) adapted to detect radiations in a region of the electromagnetic spectrum that corresponds to that of the patterns generated by the light projector.
- the known light pattern characteristics and known orientation of the pattern projector relative to the lens(es) are used by the tracking module 60 to generate a 3D geometry of the illuminated portions, using the backscatter images captured by the camera(s).
- While a single camera spaced from the pattern projector can be used, using more than one camera may increase the field of view and surface coverage, or improve precision via triangulation.
- the image capture device(s) 80 may also have one or more filters integrated therein to filter out predetermined regions or spectral bands of the electromagnetic spectrum.
- the filter can be removably or fixedly mounted in front of any given lens.
- multiple filters may be periodically positioned in front of a given camera in order to acquire spectrally resolved images with different spectral ranges at different moments in time, thereby providing time dependent spectral multiplexing.
- Such an embodiment may be achieved, for example, by positioning the multiple filters in a filter wheel that is controllably rotated to bring each filter in the filter wheel into the optical path of the given one of the camera in a sequential manner.
- the filter can allow transmittance of only some predetermined spectral features of objects within the field of view, captured either simultaneously or separately by the tracking device 40 and by the image capture device(s) 80, so as to serve as additional features that can be extracted to improve accuracy and speed of registration.
- the filter can be used to provide a maximum contrast between different materials which can improve the imaging process and more specifically the soft tissue identification process.
- the filter can be used to filter out bands that are common to backscattered radiation from typical soft tissue items, the surgical structure of interest, and the surgical tool(s) such that backscattered radiation of high contrast between soft tissue items, surgical structure and surgical tools can be acquired.
- the filter can include band-pass filters configured to let pass only some spectral bands of interest.
- the filter can be configured to let pass spectral bands associated with backscattering or reflection caused by the bones and the soft tissue, while filtering out spectral bands associated with specifically colored items such as tools, gloves and the like within the surgical field of view.
- Other methods for achieving spectrally selective detection including employing spectrally narrow emitters, spectrally filtering a broadband emitter, and/or spectrally filtering a broadband imaging detector, can also be used.
- Another light source may also be provided on the image capture device(s) 80, for a secondary tracking option, as detailed below. It is considered to apply distinctive coatings on the parts to be tracked, such as the bone and the tool, to increase their contrast relative to the surrounding soft tissue.
- the image capture device(s) 80 may be a spectrometer, a spectrophotometer, a spectral camera, a spectroscopy device, etc. Depending on the wavelengths it operates in, the image capture device(s) 80 may be referred to as a multispectral camera, a hyperspectral camera. Therefore, the imaging of the image capture device(s) 80 having spectroscopy capability may allow the system 10 to image soft tissue, such as cartilage. The resolution of the imaging may notably enable the system to determine thickness of the cartilage. It could also provide sufficient data to isolate bone from environment (e.g., including cartilage) in the processing of the imaging, in a faster manner.
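- Purely as an illustration of spectral discrimination (the bands, ratio and threshold below are invented and would in practice be calibrated against tissue spectra), a per-pixel band ratio can be used to separate bone from surrounding tissue:

```python
import numpy as np

def label_bone(pixels, band_a=0, band_b=1, ratio_threshold=1.4):
    """Toy spectral classification: label a pixel as bone when the ratio of
    two spectral bands exceeds a threshold. Bands and threshold are invented
    for illustration, not taken from the disclosure."""
    pixels = np.asarray(pixels, dtype=float)
    ratio = pixels[..., band_a] / np.maximum(pixels[..., band_b], 1e-6)
    return ratio > ratio_threshold

# Invented 2x2 image with two spectral bands per pixel.
cube = np.array([[[0.9, 0.5], [0.4, 0.5]],
                 [[0.8, 0.6], [0.7, 0.4]]])
print(label_bone(cube))      # boolean mask of "bone" pixels
```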
- the image capture device(s) 80 may include a 3D camera(s), to perform range imaging, and hence determine position data from the captured images during tracking.
- the expression 3D camera is used to describe the camera’s capability of providing range data for the objects in the image or like footage it captures, but the 3D camera may or may not produce 3D renderings of the objects it captures.
- range tracking does not seek specific illumination patterns in distance calculations, but relies instead on the images themselves and the 3D camera’s capacity to determine the distance of points of objects in the images.
- the 3D camera for ranging performs nonstructured light ranging, and the expression “ranging” is used herein to designate such non-structured light ranging.
- the image capture device(s) 80 may use a known visual pattern in a calibration performed in situ, at the start of the tracking, and optionally updated punctually or continuously throughout the tracking.
- the calibration is necessary to update the camera acquisition parameters due to possible lens distortion (e.g., radial, rotational distortion), and hence to rectify image distortion to ensure the range accuracy.
- tracking tokens with recognizable patterns may be used, with the patterns being used to determine a point of view (POV) of the image capture device(s) 80, via perspective deformation.
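- One possible way to recover the point of view from a token with a known pattern is a perspective-n-point solution; the sketch below assumes the off-the-shelf OpenCV solver and invented pattern geometry, pixel detections and camera intrinsics, and is not the calibration method prescribed by the disclosure:

```python
import numpy as np
import cv2

# Known geometry of a square tracking token (corner coordinates in metres).
object_points = np.array([[0.00, 0.00, 0.0],
                          [0.04, 0.00, 0.0],
                          [0.04, 0.04, 0.0],
                          [0.00, 0.04, 0.0]], dtype=np.float64)

# Pixel coordinates of the same corners detected in one image (invented values).
image_points = np.array([[320.0, 240.0],
                         [360.0, 242.0],
                         [358.0, 282.0],
                         [318.0, 280.0]], dtype=np.float64)

# Camera intrinsics (invented focal length and principal point), no distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Recover the pose of the token in the camera frame, i.e., the camera's point of view.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, tvec.ravel())
```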
- the image capture device(s) 80 may be used in a stitching process.
- the image capture device(s) 80 may take numerous images that can be stitched together by matching overlapping portions of the images.
- the 3D imaging capacity may be used, for example for the overlapping portions to be in 3D and thus to be more precisely stitched to one another.
- the image capture device(s) 80 may take the images from different POVs with the generated 3D images by the image capture device(s) 80 or image processing module of the CAS controller 50 serving as reference for the stitching.
- the image capture device(s) 80 may thus have image processing capability through a module therein and/or the image processing module may be part of the CAS controller 50, such as by being a submodule part of the tracking module 60.
- live tracking of the bone(s) and environment (e.g., robot, tools) can be accomplished by tracking the surface of the bone directly and solely with the image capture device(s) in a variant.
- Such tracking of the bone with the image capture device(s) 80 may rely on the tracked surface of the bone as a reference for a coordinate system(s). If available, the spectroscopic capability may be used at the outset of the imaging, and optionally as a refresher during the surgical procedure.
- Such an arrangement with “live tracking” using the image capture device(s) 80 may enable the surgical procedure to occur without any tracker 30 on the bone.
- the image capture device(s) 80 may therefore concurrently see the bone and the robot arm 20A and/or tool thereon; multiple image capture devices 80 could be present to increase the POV coverage.
- the image capture device 80 could also be on the robot arm 20A for this POV, or use coordinates from the robot controller 70 to have relative positioning data.
- a bone has a tracker 30 thereon.
- the bone is shown without soft tissue.
- the bone is a femur, but other bones may be tracked and thus may be in a fixed relation with respect to a tracker 30.
- the tracker 30 is shown as being an optical tracker, other types of trackers may be used, and this may include active trackers, inertial sensors, QR tokens, etc. As described above, the system may be without any tracker 30 in “live tracking”.
- the bone may be tracked, using the tracker 30, and the tracker device 40, or other tracking modality.
- a coordinate system is digitized and tracked, which coordinate system may include a bone landmark.
- the coordinate system may be tied to the bone.
- the bone landmark includes a mechanical axis of the femur, shown as X in Fig. 3, in the case of a femur. It may be required that the bone landmark, such as the mechanical axis, be digitized. Stated differently, points on the bone may have their position recorded in the coordinate system, i.e., they are recorded digitally/virtually.
- the bone landmark is recorded as the mechanical axis of the femur, by inducing movements of the femur relative to the pelvis.
- the pelvis may be fixed, or may be tracked for any displacement during such movements.
- the movements may be tracked via the tracker 30. It may be assumed that the tracker 30 in such movements is displaced along a surface of a virtual sphere, a center of which is the center of rotation of the hip joint, in the femoral head.
- the tracked movements of the tracker 30 fixed to the femur may therefore be used to identify a location of the center of the femoral head, in the coordinate system.
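- A common way to recover such a center of rotation from tracked marker positions is a least-squares sphere fit; the sketch below (numpy, synthetic data) illustrates that idea rather than the specific algorithm of the system 10:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: returns (centre, radius) for tracked
    marker positions acquired while the femur is pivoted about the hip."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P * P, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic example: marker positions on a sphere of radius 0.45 m centred at the hip.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 0.6, size=(50, 2))
hip = np.array([0.1, -0.2, 0.9])
pts = hip + 0.45 * np.column_stack([np.sin(angles[:, 0]) * np.cos(angles[:, 1]),
                                    np.sin(angles[:, 0]) * np.sin(angles[:, 1]),
                                    np.cos(angles[:, 0])])
centre, radius = fit_sphere(pts)
print(centre, radius)   # ~ hip position and 0.45
```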
- an entry point of the mechanical axis at the distal femur may also be digitized.
- the femur will receive a femoral implant, and thus the distal femur is exposed through incisions intra-operatively.
- a device such as a registration pointer (Fig. 1) may be used to digitize, i.e., record, a position of the entry point of the mechanical axis relative to the coordinate system.
- the bone landmark is the mechanical axis X of the femur passing through the center of the femoral head and through the digitized entry point.
- the mechanical axis is trackable in the coordinate system using the tracker 30, or any other appropriate tracking modality.
- the entry point of the mechanical axis of the femur may also be obtained via image processing, as detailed below, from the video feed or like image from the image capture device 80.
- Landmarks of other bones may be digitized using different approaches, for other types of surgery.
- a mechanical axis of the tibia may be digitized using a midpoint of the malleoli and visible landmarks on the tibial plateau.
- a surface of the glenoid or landmarks of the scapula may be obtained.
- additional bone data may be obtained to add to the coordinate system.
- the physiology of the current articular surface(s) of the femur may be important in the completion of the surgical procedure.
- the virtual model of the bone and/or of the articular surface is then added (e.g., merged, combined, overlaid or otherwise integrated) to the coordinate system of the femur.
- the system 10 may obtain a model that results from pre-operative imaging.
- the model may be a three-dimensional model generated from pre-operative CT (computerized tomography) scans or from magnetic resonance imaging (MRI), with or without processing work.
- the model may be a three-dimensional model generated from pre-operative radiography. Image processing may or may not be performed in the generation of such models.
- additional data may be used in the generation of such models, including using a bone atlas or like library.
- the bone atlas or library may for example have joint images that have been reconstructed in three dimensions from magnetic resonance imagery (MRI).
- the assistance module 90B may generate or obtain a model. Using, for example, points acquired intraoperatively, a segmentation of the articular surface of the bone may be performed, for the assistance module 90B to generate or obtain the model, notably from a library of bone models.
- the bone atlas or library may rely on a plurality of bones from different ethnicities, genders, geographical locations, laterality, etc.
- the bone atlas or library may be based on MRI images (e.g., 3D models), though other imaging modalities may be used.
- the models may be processed for data extraction, notably by distinguishing bone from soft tissue, such as cartilage.
- the data extraction may also include the identification of some landmarks. Accordingly, following data extraction, the bone may be segmented and may include personal identification associated with the bone segmentation.
- the segmentation of the bones is used to create reference envelopes for the various bone models in the atlas.
- the reference envelope may be created by vertex mapping, by which given landmarks, such as shown at L1 in Fig. 7, are at the same bone location for each bone model of the atlas.
- With vertex mapping, all bone models may have the same resolution and the same density, because of the consistency of the vertex mapping from mesh to mesh.
- the high resolution meshes or like reference envelope formats of the bone models may be normalized.
- normalization is performed for the bone models to have a shared coordinate system and/or the same scaling.
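- A minimal sketch of such normalization, assuming vertex-mapped models represented as numpy arrays (the vertex values are invented): each model is centred and scaled so that all atlas entries share a coordinate system and scale:

```python
import numpy as np

def normalise_mesh(vertices):
    """Centre a vertex-mapped mesh on its centroid and scale it to unit RMS
    vertex distance, so all atlas models share a coordinate system and scale."""
    V = np.asarray(vertices, dtype=float)
    centred = V - V.mean(axis=0)
    scale = np.sqrt((centred ** 2).sum(axis=1).mean())
    return centred / scale, scale

# Two invented "bone models" with the same vertex mapping but different size and position.
model_a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 8.0, 0.0], [0.0, 0.0, 6.0]])
model_b = 1.3 * model_a + np.array([5.0, -2.0, 1.0])
norm_a, _ = normalise_mesh(model_a)
norm_b, _ = normalise_mesh(model_b)
print(np.allclose(norm_a, norm_b))   # True: same shape after normalisation
```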
- the bone models may be limited to a given portion of the bone, such as the articular surfaces being operated on.
- Fig. 6 illustrates the creation of a bone atlas or library for distal femur alterations, such as the resection of the distal femur for subsequent implanting of a distal femur implant. As such, it may be desired to remove a portion of the femur at a level of the diaphysis.
- all bone models of the bone atlas or library may be cut at the same diaphysis level.
- the bone atlas or library, or equivalent database may have bone models having a uniform reference envelope, such as the vertex mesh pattern.
- the bone models may be scale invariant, and may have a shared coordinate system.
- the bone models may be grouped by distinctive features, to have a given number of topologies. For example, between 10-20 topologies may be isolated as being representative of thousands or tens of thousands of bone models in the bone atlas or library. More or fewer topologies may alternately be used.
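- One possible grouping strategy, shown here only as an illustration with invented feature vectors (clustering is not necessarily the method used by the disclosure), is to cluster the flattened vertex-mapped models and keep the cluster representatives as the topologies:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Small k-means used to group flattened vertex-mapped bone models into a
    handful of representative topologies (one possible grouping strategy)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Invented data: 200 normalised models, each flattened to a 12-value feature vector.
rng = np.random.default_rng(1)
models = np.vstack([rng.normal(loc=m, scale=0.05, size=(100, 12)) for m in (0.0, 1.0)])
labels, reps = kmeans(models, k=2)
print(np.bincount(labels))   # sizes of the two representative groups
```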
- a probabilistic region may be identified.
- the probabilistic region may be the region where articular contact occurs, and thus may be of greater importance to the accuracy of the surgical procedure.
- the probabilistic region may also be defined as the region on the bone that should be accessible intra-operatively from the soft tissue incision, and thus visible by the image capture device(s) 80 and/or by a registration pointer if used. Defining the probabilistic region on all bone models in the database may be performed to simplify the registration process to only try to match the probabilistic region of bone models in the database (instead of the full bone) with the surface image and/or point-cloud acquired intra-operatively (the visible/accessible surface of the bone).
- the probabilistic region may be applied to all bone models in the database.
- a reference database may thus be generated, the reference database being known for example as a reference table, a look-up table, etc.
- the reference database may include a limited number of most distinct models, that would cover all bone models of the bone atlas or library.
- the most distinct models can include the 3D geometry, such as with the reference envelope, landmarks, and probabilistic region.
- the reference database, shown as TOP X in Fig. 8, may thus be ready for comparative use intraoperatively with the images obtained by the image capture device 80.
- the image capture device 80 may be used to image the articular surface with sufficient resolution for the articular surface to be compared with the top distinct models of the reference database of Fig. 6.
- when the probabilistic region of the bone is imaged by the image capture device 80, it is the probabilistic region that may be compared with the probabilistic region of the reference database.
- the image provided by the image capture device 80 may be tied to a model from the reference database. Because of the limited number of distinct models, the comparison may be done in real-time. Appropriate processing may be done for the model from the reference database to be scaled to the size of the imaged bone.
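- As a hedged sketch of this comparison and scaling step (assuming the imaged probabilistic region and the candidate models share the same vertex mapping; the data and names are invented), each candidate can be scaled to the imaged surface and ranked by residual RMS distance:

```python
import numpy as np

def rms_after_scaling(imaged, model):
    """Scale a reference model (same vertex mapping as the imaged region) to the
    imaged surface and return the residual RMS distance; the best-matching model
    has the lowest score."""
    A = np.asarray(imaged, dtype=float) - np.mean(imaged, axis=0)
    B = np.asarray(model, dtype=float) - np.mean(model, axis=0)
    scale = np.sum(A * B) / np.sum(B * B)          # least-squares scale factor
    return np.sqrt(np.mean(np.sum((A - scale * B) ** 2, axis=1))), scale

# Invented probabilistic regions: the imaged bone and two candidate models.
imaged = np.array([[0.0, 0.0, 0.0], [20.0, 2.0, 0.0], [10.0, 15.0, 3.0]])
candidates = {"top_1": imaged / 1.1, "top_2": imaged[:, ::-1]}
scores = {name: rms_after_scaling(imaged, m) for name, m in candidates.items()}
best = min(scores, key=lambda n: scores[n][0])
print(best, scores[best])     # the model retained for intraoperative use, with its scale
```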
- landmarks L obtained intraoperatively, which may not be in the articular surface, may be tied to the bone model. For example, the center of rotation of the femoral head may be added. It may also be considered to rely on the intraoperatively imaged probabilistic region for greater resolution, and to adopt a remainder of the bone model from the reference database.
- the image capture device 80 may include one or more of a depth camera, a 3D camera, a structured light system as described above all or any of which may have spectrometer/spectroscopy capacity, a spectrometer or spectral camera of any appropriate type, that can take one or more images (e.g., photographs) of the articular surface.
- a point cloud may also be gathered, such as via the use of a registration tool, or in any other appropriate way, to supplement the imaging, or as an alternative thereto, for use with the reference database.
- Use of the bone atlas or library, such as via the reference database, is optional.
- a more precise model may be sought by using various filters associated with the distinct models.
- In Fig. 4, one such image is shown in the context of the knee femur, and features the articular surface and the tracker device 30.
- the image of the knee femur is taken when the leg is in flexion (e.g., full flexion, hyper flexion), for the image to include the posterior portions of the condyles and the distal portions of the condyles (i.e., medial and lateral condyles).
- the image(s) may also include the tracker device 30, or like tracker.
- the tracker could be a QR code placed under the skin and visible by spectroscopy.
- At least one of the image(s) may include both the condyles and the tracker device 30, in the manner shown in Fig. 4, such that the geometrical relation between the articular surface and the tracker device 30 is captured in the image(s).
- the model/image may include the articular surface and the tracker device 30. This may be referred to as snapshot or instantaneous calibrating or registering of the bone surface in the coordinate system.
- as a tracker device 30 is on the bone (live tracking being an alternative, as described above), the bone can be tracked through the procedure, bone images can be displayed on a GUI based on the tracking, and resection values can be provided based on the tracking and the instantaneous calibrating (e.g., varus-valgus, flexion-extension, etc.).
- a substantial amount of time is saved intraoperatively, for example by the absence of manual point registration, or in the reduction of the number of points required to form a point cloud.
- the instantaneous calibrating could be referred to as a punctual imaging calibration, and with the processing of the image(s) could last a few seconds, as an example.
- the instantaneous calibrating could rely on spectroscopic images as well to gather more data.
- the tracker device 30 has a distinct geometry, detectable members in known geometrical patterns, and/or clear edges and a precise shape and dimensions, and as the distinct geometry may be available (e.g., manufacturer file and models, preoperative model generation, etc.), the image(s) obtained from the image capture device 80 may be processed to add the image to the coordinate system.
- the tracker device 30 of the image (which image may be a 3D image) may be superposed onto the tracker device 30 being tracked with the coordinate system. Therefore, the tracking module 60 may create a landmark from part of the body of the tracker device 30 (or equivalent, such as QR code or pattern on the tracker device) and use such a landmark in subsequent image processing to add articular surface data to the coordinate system.
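- A minimal sketch of this superposition, assuming corresponding tracker feature points are available both in the camera frame of the image and in the tracked coordinate system, could use a least-squares rigid transform (Kabsch); the function names and inputs are hypothetical and given for illustration only.

```python
# Sketch: using the tracker device's known geometry to carry the imaged
# articular surface into the tracked coordinate system (hypothetical inputs).
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def surface_to_tracked_frame(surface_pts_cam, tracker_pts_cam, tracker_pts_tracked):
    """Map the imaged articular surface (camera frame) into the coordinate
    system in which the tracker device is already tracked."""
    R, t = rigid_transform(tracker_pts_cam, tracker_pts_tracked)
    return surface_pts_cam @ R.T + t
```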
- the articular surface data may include different types of tissues, for instance if the image capture device 80 is operated in different spectral ranges (i.e., different wavelengths) and/or images are processed by filtering different wavelengths, to discriminate between different tissues.
- the image capture device 80 may generate a bone model, a cartilage model, a ligament model, a tendon model, an implant model, etc., using spectrometer functionalities. This may include the imaging of concealed tissue, using the spectrometer functionalities.
- the image capture device(s) 80 may image areas of the knee (as an example) where there is less fat (condyles, tibial crest) in order to get a bone image through skin.
- the imaging may include the capture of images with different illumination wavelengths by the image capture device 80 and/or the filtering of the captured light (e.g., transmitted, backscattered, reflected).
- the image capture device 80 may have the capacity to generate hyperspectral data cubes of the anatomical feature, with the hyperspectral data cubes processed by the image capture device 80 and/or the CAS controller 50 (e.g., such as by an image processing submodule), to generate models of one or more of the tissue types, with bone B being a generic way to describe all.
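- As an illustrative sketch only, per-pixel tissue discrimination from such a hyperspectral data cube could be done by comparing each pixel spectrum to reference spectra (a spectral-angle type comparison); the reference spectra and tissue labels below are placeholders, not values from the present disclosure.

```python
# Sketch: per-pixel tissue discrimination from a hyperspectral data cube by
# comparing each pixel spectrum against reference spectra (placeholder data).
import numpy as np

def classify_tissues(cube, references):
    """cube: (H, W, B) hyperspectral data cube.
    references: dict tissue_name -> (B,) reference spectrum.
    Returns an (H, W) array of indices into the returned tissue-name list."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    pixels /= (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    names = list(references)
    ref = np.stack([references[k] / (np.linalg.norm(references[k]) + 1e-12)
                    for k in names])
    # Smallest spectral angle (largest cosine similarity) wins.
    cosines = pixels @ ref.T
    return np.argmax(cosines, axis=1).reshape(h, w), names
```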
- the 3D image of the articular surface of Fig. 4 can be used to precisely position the articular surface in the coordinate system relative to the bone landmark, such as the mechanical axis X.
- the image processing of an image from the image capture device 80 may be used to locate the articular surface in the coordinate system of the femur.
- the same approach may be used for the articular surfaces of other bones, including the tibial plateau of the tibia, the glenoid of the scapula, and the humeral head of the humerus.
- the spectral imaging through skin may be useful in neurologic applications, as the imaging and processing by the image capture device(s) 80 may enable the imaging of the skull directly, since skin and fat are thin around the cranium and nose.
- the navigation data that is output by the system 10 may include the image in the coordinate system, as the coordinate system is tracked using the tracker device 30, and may have a model or image added to it.
- the image may include high-resolution data, such as 3D surface information, of the articular surface.
- the 3D surface information may include a distinction between the types of human tissue.
- the image captured by the image capture device 80 may be used for matching with an existing bone model from a bone model atlas or library, as described above, to supplement the patient-specific model generated from the image capture by the image capture device 80.
- Bone matching techniques may be used to find the best fit between what is imaged by the image capture device 80 and the databases of bone geometries. Since all bone models have landmarks, landmarks can be obtained from the best fit. It may also be possible to use the images of the articular surface obtained from the image capture device 80 intraoperatively, with landmarks such as a mechanical axis or anatomical axis, as the bone model, without additional surface information, as it may suffice to have only the articular surface with an axis.
- tokens with given patterns, such as QR-code-like labels, may also be used for tracking.
- the tokens may be provided on the objects, such as on the bones or on the tracker device, and tools subsequently used during navigation, including the robot arm 20A.
- the tokens have an image that may be preprogrammed, or whose dimensions are known or accessible by the tracking module 60. Accordingly, by seeing such tokens in the images of the image capture device 80, the tracking module 60 may locate the objects in a spatial coordinate system.
- the POV for the video images may move, with the objects serving as a referential for the tracking.
- the referential system may be that of the femur, with the camera tracking providing positional information for the referential system.
- the QR-code or like label or token is flat, and hence can be positioned under the skin without causing any substantial protrusion. It can be used as a retro-reflector in some leg configurations, such as when the leg is in extension. With appropriate lighting and imaging, the QR-code-like label or token could be visible through the skin, and thus could facilitate tracking in spite of leg movement. It can be visible when the leg is in a first configuration, e.g., flexion, and then concealed under skin yet visible in a second configuration, e.g., extension (and vice-versa).
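- A minimal sketch of recovering the pose of such a flat token of known dimensions from a calibrated image is shown below; it assumes an OpenCV environment and that a detector has already located the four token corners in pixel coordinates, which are assumptions for illustration only.

```python
# Sketch: pose of a flat QR-code-like token of known size from one calibrated
# image, given its four detected corner pixels (detector and size are assumed).
import numpy as np
import cv2

def token_pose(corners_px, token_size_mm, camera_matrix, dist_coeffs):
    """corners_px: (4, 2) pixel corners in a consistent order (TL, TR, BR, BL).
    Returns the rotation and translation vectors of the token in the camera frame."""
    s = token_size_mm / 2.0
    object_pts = np.array([[-s,  s, 0.0],   # token modelled as a flat square
                           [ s,  s, 0.0],
                           [ s, -s, 0.0],
                           [-s, -s, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_px.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec
```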
- the tracking module 60 may reduce its computation using different strategies.
- the bone model(s) B may have higher resolution for the parts of the bone that will be altered during surgery, while the remainder of the bone may be limited to information on landmarks, such as axis orientation, center of rotation, midpoints, etc.
- a similar approach may be taken for the tool models C, with the focus and higher detail resolution being on parts of the tools that come into contact with the bone.
- the tracked coordinate system with bone landmarks and model can be supplemented by the digitization of points, such as by using the registration pointer (Fig. 1) and/or painting the articular surface with such device or other device.
- the system 10 may also be used to automate the identification of articular surface landmarks.
- such landmarks may include a posterior condyle plane and/or a distal condyle plane, also known as a posterior condylar plane and/or a distal condylar plane.
- the system 10 may use the model data, including the articular surface, to identify the distalmost points of the medial condyle and the lateral condyle, in the coordinate system. This may thus represent a segment or line.
- the mechanical axis of the femur, or other bone landmark may then be used to set an orientation of the distal condyle plane.
- the distal condyle plane may include the segment, line or vector passing through the distalmost points of the medial condyle and the lateral condyle, tracked with the coordinate system.
- the distal condyle plane may be oriented as a function of a flexion angle of the leg.
- the distal condyle plane P1 may be at a flexion angle a ranging from 3 to 5 degrees relative to the mechanical axis X. In another variant, the distal condyle plane P1 may be at a flexion angle a of 3 degrees relative to the mechanical axis X.
- the system 10 may also use the model data, including the articular surface, to identify the posterior-most points of the medial condyle and the lateral condyle, in the coordinate system. This may also be in the form of a segment or line. The segment, line or vector passing through the posterior-most points of the medial condyle and the lateral condyle may lie in the posterior condyle plane.
- the posterior condyle plane may be set as being perpendicular to the distal condyle plane, or may be at any given angle. Alternatively, the mechanical axis of the femur, or other bone landmark, may be used to set an orientation of the posterior condyle plane.
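- As a non-limiting sketch, the distal condyle plane could be constructed by finding the distalmost condylar points along the mechanical axis, forming the medial-lateral segment through them, and tilting the plane by the flexion angle about that segment; the point sets, axis convention and flexion value below are assumptions for illustration.

```python
# Sketch: distal condyle plane from condylar point sets and the mechanical axis.
import numpy as np

def rotate_about_axis(v, axis, angle_rad):
    """Rodrigues rotation of vector v about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle_rad)
            + np.cross(axis, v) * np.sin(angle_rad)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle_rad)))

def distal_condyle_plane(medial_pts, lateral_pts, mech_axis, flexion_deg=3.0):
    """medial_pts / lateral_pts: (N, 3) condyle surface points in the tracked frame.
    mech_axis: unit vector pointing proximally. Returns (point_on_plane, plane_normal)."""
    mech_axis = mech_axis / np.linalg.norm(mech_axis)
    # Distalmost point of each condyle = smallest projection on the mechanical axis.
    p_med = medial_pts[np.argmin(medial_pts @ mech_axis)]
    p_lat = lateral_pts[np.argmin(lateral_pts @ mech_axis)]
    seg = p_lat - p_med
    seg = seg / np.linalg.norm(seg)                 # medial-lateral segment direction
    # Component of the mechanical axis perpendicular to the segment,
    # then tilted by the flexion angle about the segment.
    axis_perp = mech_axis - np.dot(mech_axis, seg) * seg
    axis_perp = axis_perp / np.linalg.norm(axis_perp)
    normal = rotate_about_axis(axis_perp, seg, np.deg2rad(flexion_deg))
    point_on_plane = 0.5 * (p_med + p_lat)          # plane contains the segment
    return point_on_plane, normal / np.linalg.norm(normal)
```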
- the method and the system 10 described above may increase the data available during a surgical procedure, for example without the need for complex preoperative imaging, including CT scanning. This may be achieved in a time-effective manner, without the obligation to individually digitize articular surface points (though it may be done).
- the system 10 may have automated functionalities to identify bone landmarks, which bone landmarks may be part of the articular surface, and may distinguish between soft-tissue types using spectral imaging. As described above, this may include condylar planes for the femur, which condylar planes may be used thereafter as a reference for tool navigation (e.g., robot controlling), to define cut planes on the femur relative to these pre-resection condylar planes.
- the system 10 may use the condylar planes to guide a selection of implant, and provide a suitable orientation for the implant.
- the method described above, and the system 10 performing the method are well suited to be initial surgical steps, for the subsequent planning, resection, and implanting steps to rely on accurate and precise data.
- the system 10 is shown as having access to or integrating the artificial intelligence system 90, via the machine-learning (ML) module 90A and the assistance module 90B.
- the ML module 90A performs data acquisition for subsequent training of an ML algorithm.
- the data acquisition may take various forms, examples of which are provided below.
- the machine-learning module 90A receives video footage or images from surgical procedures.
- the footage and/or images may be in the form of an image feed and/or video feed from different types of cameras.
- the images and/or video feed may be obtained from the tracker 40, and/or from the image capture device 80 and/or a dedicated camera(s).
- Data acquisition by the ML module 90A may also include receiving data from the CAS controller 50, or robot controller 70 in the case of robotized surgery. In some instances, an operator performs surgery with a CAS system but without robotized assistance. In such a case, the data acquisition may include data from the CAS system.
- the ML module 90A may supplement image capture from the tracking with an identification of the bone, patient, etc., namely non-confidential data from the CAS controller 50.
- the data acquired by the ML module 90A may include patient data such as age, gender, race, ethnicity, genetics, height, weight, body mass index, congenital conditions, pathologies, medical history, etc.
- the data acquired by the ML module 90A may also include surgical flow information from the procedure operated by the CAS controller 50. This may include an identification of tools used, bones being altered, navigation data (e.g., position and/or orientation of tools relative to the bones in a referential system), the parameters of alteration (depth, orientation, navigation data), and navigation of the robot arm 20 if present.
- the data from the CAS controller 50 is synchronized with the video footage.
- the ML module 90A may also perform a synchronization of video data with control data from the CAS controller 50.
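- A minimal sketch of such a synchronization, assuming both streams carry timestamps, could pair each controller record with the nearest video frame; the field names and data layout are assumptions for illustration.

```python
# Sketch: pairing CAS controller records with the nearest video frame by timestamp.
import numpy as np

def synchronize(frame_times, controller_records):
    """frame_times: sorted 1-D array of frame timestamps (seconds).
    controller_records: list of dicts each holding a 'time' key (seconds).
    Returns a list of (frame_index, record) pairs."""
    frame_times = np.asarray(frame_times)
    pairs = []
    for rec in controller_records:
        i = np.searchsorted(frame_times, rec["time"])
        # Pick whichever neighbouring frame is closest in time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
        best = min(candidates, key=lambda j: abs(frame_times[j] - rec["time"]))
        pairs.append((int(best), rec))
    return pairs
```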
- an assessment of the surgery is done post-operatively.
- the ML module 90A may access this information as part of data acquisition as well.
- the assessment of surgery may take various forms, including quantitative data.
- the quantitative data may be distance or length data, such as limb length discrepancy, cut depth.
- the quantitative data may be orientation data, such as varus/valgus, offset, tilt, etc.
- the quantitative data may be volumetric data, such as volume of bone removed, volume of resection.
- the assessment may also include qualitative data, with patient feedback including pain level, perceived mobility, patient satisfaction score, etc.
- the assessment data may be acquired over a rehabilitation period, with post-operative patient follow ups and the use of wearable sensor technologies, for instance over an extended period of time.
- the ML module 90A may thus train a ML algorithm to understand surgical flow as a function of the particular surgeries.
- the training of the ML algorithm may be based on training data acquired from multiple prior surgeries, in different locations, from different SA systems 10, and/or involving different surgeons.
- the training of the ML algorithm in the ML module 90A may include at least 100 surgical procedures, without an upper limit on the number of procedures reviewed.
- the machine learning algorithm may be trained with or without supervision by observing surgeries for patients of different age, gender, race, ethnicity, genetics, height, weight, body mass index, congenital conditions, pathologies, medical history, etc., to train the ML algorithm with procedures covering a wide diversity of cases, including standard cases and deviation cases.
- the learning module 90A may produce and output a parametrized ML algorithm.
- the ML algorithm may be selected from different supervised machine learning algorithms, such as neural networks, Bayesian networks, support vector machines, instance-based learning, decision trees, random forests, linear classifiers, quadratic classifiers, linear regression, logistic regression, k-nearest neighbor, hidden Markov models, or the like.
- the ML algorithm may be selected from different unsupervised machine learning algorithms, such as expectation-maximization algorithms, vector quantization, and the information bottleneck method.
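- For illustration only, one of the listed supervised algorithms (here a random forest) could be trained on surgical-flow and patient features to predict a post-operative outcome score, as in the hedged sketch below; the feature names and outcome score are placeholders, not data from the present disclosure.

```python
# Sketch: training a random forest (one of the listed supervised algorithms)
# on surgical-flow and patient features to predict a post-operative score.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_outcome_model(features, outcomes):
    """features: (n_cases, n_features) array, e.g. cut depths, varus/valgus,
    patient age, BMI (placeholder features); outcomes: (n_cases,) post-op scores."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, outcomes, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))  # quick sanity check
    return model
```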
- the ML module 90A may perform image processing on the surgery imaging, video feed and/or CAS controller data, in order to identify the various tools and bones used, as well as movements and interactions between them.
- the image processing is done locally, in edge computing. This includes observing the geometry of tools, the position and orientation of tools relative to the bones, the bone surfaces including their geometries, the different sizes of tools.
- the output of image processing may be correlated with CAS controller data, such as bone names, tool names and models.
- the output of image processing may also be associated with patient data, including age, gender, race, ethnicity, genetics, height, weight, body mass index, congenital conditions, pathologies, and/or medical history, etc.
- the image processing may for example be supervised by the involvement of a reviewer.
- the image processing may then be used by the ML algorithm to learn the surgical flow, i.e., observing the geometry of tools, the position and orientation of tools relative to the bones, the bone surfaces including their geometries, the different sizes of tools, and the sequences of steps of surgery vis-à-vis the specific details of patient data.
- the learning of the surgical flow may include understanding the sequence of steps of any particular surgical procedure.
- the sequence of steps of the surgical flow may be correlated with CAS controller data, such as bone names, tool names and models.
- the sequence of steps of the surgical flow may also be associated with patient data, including age, gender, race, ethnicity, genetics, height, weight, body mass index, congenital conditions, pathologies, and/or medical history, etc.
- the sequence of steps of the surgical flow may be associated with the assessment of the surgery done post-operatively, such as in the form of the quantitative data and/or qualitative data, to train the ML algorithm in evaluating a surgical flow and its numerous parameters as a function of post-operative assessment.
- the trained ML algorithm may have the capacity of performing various functions through its training.
- the ML algorithm may add bookmarks to the video feed.
- the bookmarks are in the form of metadata or time stamps in an audio or video track of the video feed.
- the bookmarks may be associated with particular steps of a surgical flow, deviations from standard surgical flow, rare occurrences, specific scenarios, bone and tool pairings, patient data, etc.
- the bookmarks may be configured for subsequent retrieval if access to the video feed is desired or required, for instance for training purposes. Accordingly, the ML algorithm may contribute to the creation of an atlas of video footage, with the bookmarks enabling the searching and access of desired video excerpts.
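- A minimal sketch of how such bookmarks could be represented as searchable metadata is given below; the fields and labels are illustrative assumptions, not a defined format of the present disclosure.

```python
# Sketch: bookmarks as searchable metadata on a video feed (placeholder fields).
from dataclasses import dataclass, field

@dataclass
class Bookmark:
    time_s: float                          # timestamp in the video track
    label: str                             # e.g. "distal femoral cut", "deviation"
    tags: list = field(default_factory=list)

def find_bookmarks(bookmarks, query):
    """Return bookmarks whose label or tags contain the query string."""
    q = query.lower()
    return [b for b in bookmarks
            if q in b.label.lower() or any(q in t.lower() for t in b.tags)]
```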
- the ML algorithm may also label steps of surgical workflows.
- the labelling of such steps may include a start time and a finish time for the surgical step, for segments of a surgical procedure, or for groupings of steps, for example. Consequently, a duration of any given step may be measured, and this data may be correlated to the type of surgery, to the patient data detailed above, and to surgeon identity, for example.
- the duration data may be used to produce statistical data.
- the statistical data may consequently be used for video training, for instance to provide exemplary video segments showing more efficient steps.
- the statistical data may also be used for surgical workflow optimization.
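- As a non-limiting sketch, step durations could be derived from the labelled start and finish times and aggregated per step type for such statistics; the record keys below are assumptions for illustration.

```python
# Sketch: per-step duration statistics from labelled start/finish times.
from collections import defaultdict
from statistics import mean

def step_duration_stats(labelled_steps):
    """labelled_steps: list of dicts with 'name', 'start_s', 'finish_s' keys.
    Returns {step name: (count, mean duration in seconds)}."""
    durations = defaultdict(list)
    for step in labelled_steps:
        durations[step["name"]].append(step["finish_s"] - step["start_s"])
    return {name: (len(d), mean(d)) for name, d in durations.items()}
```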
- the assistance module 90B has been updated with the parametrized machine learning algorithm.
- the assistance module 90B accesses the server in which the parametrized ML algorithm is located, for instance locally in edge computing, but in other embodiments the ML algorithm may be on the cloud for example.
- data acquisition is performed by the assistance module 90B.
- At least some of the forms of data acquisition used by the learning module 90A may be used by the assistance module 90B.
- This includes live video footage from the tracker 40 and/or the image capture device 80, communication with the CAS controller 50 (including the robot controller 70 of Fig. 1 in the case of robotized surgery), input from a surgeon, access to post-operative qualitative and/or quantitative assessment data, etc.
- the assistance module 90B may document the instant surgical procedure, notably by adding bookmarks as set out above, by labelling some of the surgical steps, etc.
- the assistance module 90B may propose bone models to add to the coordinate system.
- the assistance module 90B may rely on all available data acquired during the steps described above with reference to the system 10, to provide bone models.
- the articular surface bone landmarks, such as the condylar planes, may be output by the assistance module 90B based on the training of the ML algorithm.
- the present disclosure pertains to a system and/or a method for tracking a bone in computer-assisted surgery that may include: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using a tracker on the bone; imaging the bone with the tracker; processing the imaging of the bone with the tracker to add an image of a portion of the bone to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system.
- imaging the bone with the tracker includes imaging the bone with an optical tracker; imaging the bone with the tracker includes imaging via an image-capturing device providing a three- dimensional model of the bone.
- the present disclosure pertains to a system and/or a method for tracking a bone in computer-assisted surgery that may include: digitizing and tracking a coordinate system including at least one bone landmark of a bone, using at least one image capture device having depth imaging capacity to generate three-dimensional models; imaging the bone with the at least one image capture device; processing the imaging of the bone to add an image of a portion of the bone to the coordinate system; and outputting navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system with the at least one image capture device.
- Outputting navigation data of the bone may include outputting navigation data of a robotic tool from the tracking of the coordinate system with the at least one image capture device.
- Outputting navigation data of the bone may include outputting navigation data of a robotic tool on which the at least one image capture device is secured.
- Imaging the bone with the at least one image capture device includes imaging the bone with cartilage using spectroscopic imaging from the at least one image capture device.
- Tracking of the coordinate system with the at least one image capture device may include tracking the bone by capturing images of the bone through soft tissue with the at least one image capture device. The tracking may be done without a tracker pinned to the bone.
- the images from the image capture device 80 can be used to validate the bone cuts.
- the bone cuts may be validated vis-à-vis a surgical plan, for example.
- the present disclosure pertains to a system and/or a method for tracking a bone in computer-assisted surgery that may include: digitizing and tracking a coordinate system including at least one axis of a femur; obtaining a model of an articular surface of the femur; adding the model of the articular surface of the femur to the coordinate system; generating at least one plane for the articular surface of the femur using the model; and outputting navigation data of the femur including the at least one plane from the tracking of the coordinate system.
- generating at least one plane for the articular surface of the femur includes identifying distalmost points of a lateral condyle and of a medial condyle of the femur, the distalmost points lying in the plane; generating at least one plane for the articular surface of the femur includes orienting the plane relative to the at least one axis of the femur; generating at least one plane for the articular surface of the femur includes identifying posterior-most points of a lateral condyle and of a medial condyle of the femur, the posterior-most points lying in the plane; generating at least one plane for the articular surface of the femur includes orienting the plane relative to the at least one axis of the femur, the plane including the posterior-most points.
- the processing unit 51 may comprise, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, a central processing unit (CPU), an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, other suitably programmed or programmable logic circuits, or any combination thereof.
- the memory 52 may comprise any suitable machine-readable storage medium.
- the memory 52 may comprise non-transitory computer readable storage medium, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- the memory 52 may include a suitable combination of any type of computer memory that is located either internally or externally to device, for example random-access memory (RAM), read-only memory (ROM), compact disk read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), and electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
- Memory 52 may comprise any storage means (e.g., devices) suitable for retrievably storing machine-readable instructions executable by processing unit 51.
- the methods and systems described herein may be implemented in a high-level procedural or object-oriented programming or scripting language, or a combination thereof, to communicate with or assist in the operation of a computer system, for example the computing device including the controller 50.
- the methods and systems described herein may be implemented in assembly or machine language.
- the language may be a compiled or interpreted language.
- Program code for implementing the methods and systems described herein may be stored on a storage media or a device, for example a ROM, a magnetic disk, an optical disk, a flash drive, or any other suitable storage media or device.
- the program code may be readable by a general or special-purpose programmable computer for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein.
- Embodiments of the methods and systems described herein may also be considered to be implemented by way of a non-transitory computer-readable storage medium having a computer program stored thereon.
- the computer program may comprise computer-readable instructions which cause a computer, or more specifically the processing unit 51 of the computing device and/or controller 50, to operate in a specific and predefined manner to perform the functions described herein, for example those described in the above-noted methods.
- Computer-executable instructions may be in many forms, including program modules, executed by one or more computers or other devices.
- program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- the embodiments described herein are implemented by physical computer hardware, including computing devices, servers, receivers, transmitters, processors, memory, displays, and networks.
- the embodiments described herein provide useful physical machines and particularly configured computer hardware arrangements.
- the embodiments described herein are directed to electronic machines and methods implemented by electronic machines adapted for processing and transforming electromagnetic signals which represent various types of information.
- the technical solution of embodiments may be in the form of a software product.
- the software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a removable hard disk.
- the software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the methods provided by the embodiments.
Abstract
A system for tracking a bone in computer-assisted surgery comprises a controller including a processing unit and a non-transitory computer-readable memory communicatively coupled to the processing unit. The memory comprises computer-readable program instructions executable by the processing unit to: digitize and track a coordinate system including at least one bone landmark of a bone, using a tracker on the bone; image the bone with the tracker; process the imaging of the bone with the tracker; add an image of a portion of the bone to the coordinate system; and output navigation data of the bone including the bone landmark and the image from the tracking of the coordinate system.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363578542P | 2023-08-24 | 2023-08-24 | |
| US63/578,542 | 2023-08-24 | | |
| US202463554571P | 2024-02-16 | 2024-02-16 | |
| US63/554,571 | 2024-02-16 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025039086A1 true WO2025039086A1 (fr) | 2025-02-27 |
Family
ID=94731180
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2024/051094 Pending WO2025039086A1 (fr) | 2023-08-24 | 2024-08-23 | Procédé et système de suivi d'un os en chirurgie assistée par ordinateur |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025039086A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070270680A1 (en) * | 2006-03-22 | 2007-11-22 | Garrett Sheffer | Modeling method and apparatus for use in surgical navigation |
| US20100256504A1 (en) * | 2007-09-25 | 2010-10-07 | Perception Raisonnement Action En Medecine | Methods and apparatus for assisting cartilage diagnostic and therapeutic procedures |
| US20220398744A1 (en) * | 2021-06-15 | 2022-12-15 | Orthosoft Ulc | Tracking system for robotized computer-assisted surgery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24855184; Country of ref document: EP; Kind code of ref document: A1 |