
WO2025250376A1 - Structured light for touchless 3D registration in video-based surgical navigation - Google Patents

Structured light for touchless 3D registration in video-based surgical navigation

Info

Publication number
WO2025250376A1
Authority
WO
WIPO (PCT)
Prior art keywords
laser projection
laser
image
camera
patient anatomy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/029515
Other languages
English (en)
Inventor
Tânia SOUSA BAPTISTA
Miguel Marques
Carolina DOS SANTOS RAPOSO
Luis Carlos Fial TEIXEIRA RIBEIRO
Michel Gonçalves ALMEIDA ANTUNES
João Pedro DE ALMEIDA BARRETO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smith and Nephew Asia Pacific Pte Ltd
Smith and Nephew Inc
Original Assignee
Smith and Nephew Asia Pacific Pte Ltd
Smith and Nephew Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smith and Nephew Asia Pacific Pte Ltd and Smith and Nephew Inc
Publication of WO2025250376A1


Definitions

  • the present disclosure relates to surgical navigation systems and methods, and more particularly to touchless registration techniques for surgical navigation systems.
  • Arthroscopic surgical procedures are minimally invasive surgical procedures in which access to the surgical site within the body is by way of small keyholes or ports through the patient’s skin.
  • the various tissues within the surgical site are visualized by way of an arthroscope placed through a port or portal, and the internal scene is shown on an external display device.
  • the tissue may be repaired or replaced through the same or additional ports.
  • in computer-assisted surgical procedures (e.g., surgical procedures associated with a knee or knee joint, surgical procedures associated with a hip or hip joint, etc.), the location of various objects within the surgical site may be tracked relative to the bone by way of images captured by an arthroscope and a three-dimensional model of the bone.
  • a system for performing touchless registration of patient anatomy includes memory storing instructions and one or more processing devices configured to execute the instructions. Executing the instructions causes the system to obtain, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, detect the laser projection in the image obtained from the camera, obtain a three-dimensional set of points corresponding to the detected laser projection, and register the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
  • detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector. Executing the instructions further causes the system to track visual markers on the laser projector to obtain the position information associated with the laser projector. Determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector.
  • Obtaining the three-dimensional set of points includes segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest.
  • Obtaining the three-dimensional set of points includes, for each point in the set of points within the laser projection, determining three-dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera.
  • the laser projection is a single line.
  • the laser projection corresponds to a line formed by an intersection of a light plane projected by the laser projector and a surface of the patient anatomy.
  • Detecting the laser projection includes removing distortion from the image to obtain an undistorted image.
  • Detecting the laser projection includes obtaining a color-channel representation of the image based on a color of the laser projection, obtaining a greyscale representation of the image, and obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation.
  • Detecting the laser projection includes obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection.
  • Obtaining the three-dimensional set of points includes performing optical triangulation on the detected laser projection.
  • the camera is an arthroscopic camera.
  • a method for performing touchless registration of patient anatomy includes, using one or more processors to execute instructions stored in memory, obtaining, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, detecting the laser projection in the image obtained from the camera, obtaining a three-dimensional set of points corresponding to the detected laser projection, and registering the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
  • detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector, and determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector.
  • the method further includes tracking visual markers on the laser projector to obtain the position information associated with the laser projector.
  • Obtaining the three-dimensional set of points includes at least one of: (i) segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest; (ii) for each point in the set of points within the laser projection, determining three-dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera; and (iii) performing optical triangulation on the detected laser projection.
  • the laser projection is a single line formed by an intersection of (i) a light plane projected by the laser projector and (ii) a surface of the patient anatomy.
  • detecting the laser projection includes removing distortion from the image to obtain an undistorted image.
  • detecting the laser projection includes obtaining a color-channel representation of the image based on a color of the laser projection, obtaining a greyscale representation of the image, obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation, and obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection.
  • the camera is an arthroscopic camera.
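  • For illustration only, the following minimal Python sketch walks through the claimed flow (obtain an image containing a laser projection, detect the projection, reconstruct a three-dimensional point set from it, and store registration data). It assumes a calibrated camera and a known laser light plane in camera coordinates; all function names and numeric values are hypothetical and not part of the disclosure.

```python
# Minimal sketch of the claimed touchless-registration flow, assuming a
# calibrated camera and a known laser light plane. All names are hypothetical.
import numpy as np

def detect_laser_projection(image_bgr):
    """Return pixel coordinates (N x 2) of the laser line in the image.
    Placeholder: threshold the red channel against the grey level."""
    grey = image_bgr.mean(axis=2)
    red = image_bgr[:, :, 2].astype(float)
    mask = (red - grey) > 40                      # laser assumed to be red and brighter
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys], axis=1)

def reconstruct_points(pixels, K, plane_cam):
    """Intersect back-projected pixel rays with the laser plane (n, d) given
    in camera coordinates: n . X + d = 0."""
    n, d = plane_cam
    K_inv = np.linalg.inv(K)
    pts = []
    for u, v in pixels:
        ray = K_inv @ np.array([u, v, 1.0])       # direction of the pixel ray
        t = -d / (n @ ray)                        # scale so the ray meets the plane
        pts.append(t * ray)
    return np.array(pts)

def register(points_3d, anatomy_id, store):
    """Store data correlating the patient anatomy to the 3D point set."""
    store[anatomy_id] = points_3d
    return store

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 40, size=(480, 640, 3), dtype=np.uint8)
    frame[240, 100:540, 2] = 255                  # synthetic horizontal laser line
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    plane = (np.array([0.0, 0.5, -1.0]), 120.0)   # assumed laser plane in camera frame
    pix = detect_laser_projection(frame)
    pts = register(reconstruct_points(pix, K, plane), "femur", {})
    print(len(pts["femur"]), "reconstructed points")
```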
  • FIG. 1 shows a surgical system in accordance with at least some embodiments
  • FIG. 2 shows a conceptual drawing of a surgical site with various objects within the surgical site tracked, in accordance with at least some embodiments
  • FIG. 3 shows a method in accordance with at least some embodiments
  • FIG. 4 is an example video display showing portions of a femur and a bone fiducial during a registration procedure, in accordance with at least some embodiments;
  • FIG. 5 shows a method in accordance with at least some embodiments
  • FIGS. 6A and 6B show an example laser projector 600 and arthroscope configured in accordance with at least some embodiments
  • FIGS. 7A, 7B, 7C, and 7D show an example laser contour detection process in accordance with at least some embodiments
  • FIGS. 8A, 8B, and 8C show an example calibration process in accordance with at least some embodiments
  • FIG. 9 shows an example 3D reconstruction process based on optical triangulation techniques in accordance with at least some embodiments
  • FIGS. 10A, 10B, 10C, and 10D show example images that include synthetic laser projections in accordance with at least some embodiments
  • FIG. 11 shows an example method for performing touchless registration techniques in accordance with at least some embodiments.
  • FIG. 12 shows an example computer system or computing device configured to implement the various systems and methods of the present disclosure.
  • a processor programmed to perform various functions refers to one processor programmed to perform each and every function, or more than one processor collectively programmed to perform each of the various functions.
  • an initial reference to “a [referent]”, and then a later reference for antecedent basis purposes to “the [referent]”, shall not obviate the fact that the recited referent may be plural.
  • phrases, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context.
  • the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
  • the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”
  • a timer circuit may define a clock output.
  • the example timer circuit may create or drive a clock signal on the clock output.
  • these “inputs” and “outputs” define electrical connections and/or signals transmitted or received by those connections.
  • these “inputs” and “outputs” define parameters read by or written by, respectively, the instructions implementing the function.
  • “input” may refer to actions of a user, interactions with input devices or interfaces by the user, etc.
  • “Controller,” “module,” or “circuitry” shall mean, alone or in combination, individual circuit components, an application specific integrated circuit (ASIC), a microcontroller with controlling software, a reduced-instruction-set computer (RISC) with controlling software, a digital signal processor (DSP), a processor with controlling software, a programmable logic device (PLD), a field programmable gate array (FPGA), or a programmable system-on-a-chip (PSOC), configured to read inputs and drive outputs responsive to the inputs.
  • proximal refers to a point or direction nearest a handle of the probe (e.g., a direction opposite the probe tip).
  • distal refers to a point or direction nearest the probe tip (e.g., a direction opposite the handle).
  • a non-transitory computer readable medium stores computer data, which data can include computer program code (or computer-executable instructions) that is executable by a computer, in machine-readable form.
  • a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals.
  • Computer readable storage media refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, optical storage, cloud storage, magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
  • server should be understood to refer to a service point that provides processing, database, and communication facilities.
  • server can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network and storage devices, as well as operating software and one or more database systems and application software that support the services provided by the server. Cloud servers are examples.
  • a “network” should be understood to refer to a network that may couple devices so that communications may be exchanged, such as between a server and a client device or other types of devices, including between wireless devices coupled via a wireless network, for example.
  • a network may also include mass storage, such as network attached storage (NAS), a storage area network (SAN), a content delivery network (CDN) or other forms of computer or machine-readable media, for example.
  • a network may include the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), wire-line type connections, wireless type connections, cellular or any combination thereof.
  • sub-networks which may employ differing architectures or may be compliant or compatible with differing protocols, may interoperate within a larger network.
  • a wireless network should be understood to couple client devices with a network.
  • a wireless network may employ stand-alone ad- hoc networks, mesh networks, Wireless LAN (WLAN) networks, cellular networks, or the like.
  • a wireless network may further employ a plurality of network access technologies, including Wi-Fi, Long Term Evolution (LTE), WLAN, Wireless Router (WR) mesh, or 2nd, 3rd, 4th or 5th generation (2G, 3G, 4G or 5G) cellular technology, mobile edge computing (MEC), Bluetooth, 802.11b/g/n, or the like.
  • Network access technologies may enable wide area coverage for devices, such as client devices with varying degrees of mobility, for example.
  • a wireless network may include virtually any type of wireless communication mechanism by which signals may be communicated between devices, such as a client device or a computing device, between or within a network, or the like.
  • a computing device may be capable of sending or receiving signals, such as via a wired or wireless network, or may be capable of processing or storing signals, such as in memory as physical memory states, and may, therefore, operate as a server.
  • devices capable of operating as a server may include, as examples, dedicated rackmounted servers, desktop computers, laptop computers, set top boxes, integrated devices combining various features, such as two or more features of the foregoing devices, or the like.
  • a client (or consumer or user) device may include a computing device capable of sending or receiving signals, such as via a wired or a wireless network.
  • a client device may, for example, include a desktop computer or a portable device, such as a cellular telephone, a smart phone, a display pager, a radio frequency (RF) device, an infrared (IR) device, a Near Field Communication (NFC) device, a Personal Digital Assistant (PDA), a handheld computer, a tablet computer, a phablet, a laptop computer, a set top box, a wearable computer, a smart watch, an integrated or distributed device combining various features, such as features of the foregoing devices, or the like.
  • the client device can also be, or can communicatively be coupled to, any type of known or to be known medical device (e.g., any type of Class I, II or III medical device), such as, but not limited to, an MRI machine, CT scanner, Electrocardiogram (ECG or EKG) device, photoplethysmograph (PPG), Doppler and transit-time flow meter, laser Doppler, an endoscopic device, a neuromodulation device, a neurostimulation device, and the like, or some combination thereof.
  • Computer-Aided Surgery (CAS) and surgical navigation systems support surgeons in planning and performing complex surgical procedures with increased precision and accuracy.
  • arthroscopy is a minimally invasive medical procedure for diagnosing and treating joint problems.
  • An orthopedic surgeon makes a small incision in the skin of the patient and inserts a lens into the incision.
  • the lens is attached to a camera (e.g., an endoscopic camera) and coupled to a light source, allowing the joint to be visualized and treated.
  • Surgical navigation and CAS systems have had a significant impact on minimally invasive surgeries (MIS) such as arthroscopic procedures because the increased difficulty in visualizing the anatomy of the patient further complicates the surgical workflow.
  • Video-based surgical navigation (VBSN) uses visual fiducials or markers (also called visual markers) attached to patient anatomy to guide the surgeon throughout the medical procedure.
  • the video-based navigation process requires the precise registration of a pre-operative anatomical model with data acquired intra-operatively.
  • the registration process requires the surgeon to digitize the surface of interest that corresponds to the preoperative model.
  • the visual markers attached to the anatomies define reference frames to which the pre-operative model and the intra-operative acquired data are aligned.
  • fiducial markers with known visual patterns may be attached both to the targeted anatomy and to the instruments and subsequently tracked such that their relative poses can be accurately estimated (e.g., by applying 3D computer vision methods on the images/video acquired by a camera). These relative poses allow the instruments to be located with respect to the anatomy at every frame time instant.
  • VBSN facilitates the tracking of instruments with respect to the targeted anatomy to which a fiducial is rigidly attached (which may be referred to as a “base marker”).
  • the registration process may be touch-based.
  • Touch-based registration includes touching various points or “painting” patient anatomy using a touch probe to digitize anatomical surfaces.
  • touch-based registration processes are time-consuming and complex.
  • surgical navigation systems and methods of the present disclosure are directed to touchless registration techniques.
  • touchless registration techniques of the present disclosure include the use of a simple and affordable laser projector.
  • a laser projector is used to project a contour onto the anatomical surface, which is subsequently detected in a video feed provided by an arthroscopic camera using image processing techniques.
  • the 3D position of the identified laser contour in the reference frame of the camera can be determined.
  • a reconstructed contour can be represented in the reference frame of the patient and subsequently registered with the pre-operative model.
  • the principles of the present disclosure may be implemented using other types of cameras used for surgical procedures, such as other types of fiber optic cameras or other cameras separate from (i.e., not integrated with) the laser projector (endoscopic cameras, laparoscopic cameras, robotic surgery cameras, etc.).
  • FIG. 1 shows an example surgical system (e.g., a system including or implementing an arthroscopic video-based navigation system) 100 in accordance with at least some embodiments of the present disclosure.
  • the example surgical system 100 comprises a tower or device cart 102 and various tools or instruments, such as an example mechanical resection instrument 104, an example plasma-based ablation instrument (hereafter just ablation instrument 106), and an endoscope in the example form of an arthroscope 108 and attached camera head or camera 110.
  • the arthroscope 108 may be a rigid device, unlike endoscopes for other procedures, such as upper-endoscopies.
  • the device cart 102 may comprise a display device 114, a resection controller 116, and a camera control unit (CCU) together with an endoscopic light source and video (e.g., a VBN) controller 118.
  • the combined CCU and video controller 118 not only provides light to the arthroscope 108 and displays images received from the camera 110, but also implements various additional aspects, such as registering a three-dimensional bone model with the bone visible in the video images, and providing computer-assisted navigation during the surgery.
  • the combined CCU and video controller are hereafter referred to as surgical controller 118.
  • the CCU and video controller may be a separate and distinct system from the controller that handles registration and computer-assisted navigation, yet the separate devices would nevertheless be operationally coupled.
  • the example device cart 102 further includes a pump controller 122 (e.g., single or dual peristaltic pump). Fluidic connections of the mechanical resection instrument 104 and ablation instrument 106 to the pump controller 122 are not shown so as not to unduly complicate the figure. Similarly, fluidic connections between the pump controller 122 and the patient are not shown so as not to unduly complicate the figure. In the example system, both the mechanical resection instrument 104 and the ablation instrument 106 are coupled to the resection controller 116, which in this example is a dual-function controller. In other cases, however, there may be a mechanical resection controller separate and distinct from an ablation controller.
  • the example devices and controllers associated with the device cart 102 are merely examples, and other examples include vacuum pumps, patient-positioning systems, robotic arms holding various instruments, ultrasonic cutting devices and related controllers, patient-positioning controllers, and robotic surgical systems.
  • FIGS. 1 and 2 further show additional instruments that may be present during an arthroscopic surgical procedure.
  • These include an example probe 124 (e.g., shown as a touch probe, but which may be a touchless probe in other examples), a drill guide or aimer 126, and a bone fiducial 128.
  • the probe 124 may be used during the surgical procedure to provide information to the surgical controller 118, such as information to register a three-dimensional bone model to an underlying bone visible in images captured by the arthroscope 108 and camera head 110.
  • the aimer 126 may be used as a guide for placement and drilling with a drill wire to create an initial or pilot tunnel through the bone.
  • the bone fiducial 128 may be affixed or rigidly attached to the bone and serve as an anchor location for the surgical controller 118 to know the position and orientation of the bone (e.g., after registration of a three-dimensional bone model). Additional tools and instruments may be present, such as the drill wire, various reamers for creating the throughbore and counterbore aspects of a tunnel through the bone, and various tools, such as for suturing and anchoring a graft. These additional tools and instruments are not shown so as not to further complicate the figure.
  • An example workflow for a surgical procedure is described below. While described with respect to an example anterior cruciate ligament repair procedure, the below techniques may also be performed for other types of surgical procedures, such as hip procedures or other procedures that include joint distraction.
  • a surgical procedure may begin with a planning phase.
  • An example procedure may start with imaging (e.g., X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI)) of the anatomy of the patient, including the relevant anatomy (e.g., for a knee procedure the lower portion of the femur, the upper portion of the tibia, and the articular cartilage; for a hip procedure, an upper portion of the femur, the acetabulum/hip joint, pelvis, etc.).
  • the imaging may be preoperative imaging, hours or days before the intraoperative repair, or the imaging may take place within the surgical setting just prior to the intraoperative repair.
  • the discussion that follows assumes MRI imaging, but again many different types of imaging may be used.
  • the image slices from the MRI imaging can be segmented such that a volumetric model or three-dimensional model of the anatomy is created. Any suitable currently available, or after developed, segmentation technology may be used to create the three-dimensional model. More specifically to the example of anterior cruciate ligament repair, a three-dimensional bone model of the lower portion of the femur, including the femoral condyles, is created. Conversely, for a hip procedure, a three- dimensional model of the upper portion of the femur and at least a portion of the pelvis (e.g., the acetabulum) is created.
  • an operative plan is created.
  • the results of the planning may include: a three-dimensional bone model of the distal end of the femur; a three-dimensional bone model for a proximal end of the tibia; an entry location and exit location through the femur and thus a planned-tunnel path for the femur; and an entry location and exit location through the tibia and thus a planned-tunnel path through the tibia.
  • Other surgical parameters may also be selected during the planning, such as tunnel throughbore diameters, tunnel counterbore diameters and depth, desired post-repair flexion, and the like, but those additional surgical parameters are omitted so as not to unduly complicate the specification.
  • the results of the planning may include a three-dimensional bone model of the proximal end of the femur; a three-dimensional bone model for at least a portion of the pelvis/hip joint (e.g., a region of the pelvis corresponding to the acetabulum); a surgical area of interest within the hip joint; and parameters associated with achieving an amount of distraction in the surgical area of interest to provide sufficient access to the surgical area of interest.
  • example hip procedures may include, but are not limited to, labral repair, femoroacetabular impingement (FAI) debridement (e.g., removal of bone spurs/growths), cartilage repair, and synovectomy (e.g., removal of inflamed tissue).
  • These example procedures typically require access to a specific surgical area of interest within the hip joint (i.e., in a specific area within an interface between the pelvis and the femoral head, such as an area around/surrounding a bone spur or growth, cartilage or tissue to be repaired or removed, etc.).
  • the intraoperative aspects include steps and procedures for setting up the surgical system to perform the various repairs. It is noted, however, that some of the intraoperative aspects (e.g., optical system calibration) may take place before any portals or incisions are made through the patient’s skin, and in fact before the patient is wheeled into the surgical room. Nevertheless, such steps and procedures may be considered intraoperative as they take place in the surgical setting and with the surgical equipment and instruments used to perform the actual repair.
  • An example procedure can be conducted arthroscopically and is computer- assisted in the sense that the surgical controller 118 is used for arthroscopic navigation within the surgical site. More particularly, in example systems the surgical controller 118 provides computer-assisted navigation during the procedure by tracking locations of various objects within the surgical site, such as the location of the bone within the three-dimensional coordinate space of the view of the arthroscope, and location of the various instruments within the three-dimensional coordinate space of the view of the arthroscope. A brief description of such tracking techniques is described below.
  • FIG. 2 shows a conceptual drawing of a surgical site with various objects (e.g., surgical instruments/tools) within the surgical site.
  • the arthroscope 108 illuminates the surgical site with visible light.
  • the illumination is illustrated by arrows 208.
  • the illumination provided to the surgical site is reflected by various objects and tissues within the surgical site, and the reflected light that returns to the distal end enters the arthroscope 108, propagates along an optical channel within the arthroscope 108, and is eventually incident upon a capture array within the camera 110 (FIG. 1 ).
  • the images detected by the capture array within the camera 110 are sent electronically to the surgical controller 118 (FIG. 1 ) and displayed on the display device 114 (FIG. 1 ).
  • the arthroscope 108 is monocular or has a single optical path through the arthroscope for capturing images of the surgical site, notwithstanding that the single optical path may be constructed of two or more optical members (e.g., glass rods, optical fibers). That is to say, in example systems and methods the computer-assisted navigation provided by the arthroscope 108, the camera 110, and the surgical controller 118 is provided with the arthroscope 108 that is not a stereoscopic endoscope having two distinct optical paths separated by an interocular distance at the distal end of the endoscope.
  • Viewing direction refers to a line residing at the center of an angle subtended by the outside edges or peripheral edges of the view of an endoscope.
  • the viewing direction for some arthroscopes is aligned with the longitudinal central axis of the arthroscope, and such arthroscopes are referred to as “zero degree” arthroscopes (e.g., the angle between the viewing direction and the longitudinal central axis of the arthroscope is zero degrees).
  • the viewing direction of other arthroscopes forms a non-zero angle with the longitudinal central axis of the arthroscope.
  • the viewing direction forms a 30° angle to the longitudinal central axis of the arthroscope, the angle measured as an obtuse angle beyond the distal end of the arthroscope.
  • the view angle 210 of the arthroscope 108 forms a non-zero angle to the longitudinal central axis 212 of the arthroscope 108.
  • the example bone fiducial 128 is a multifaceted element, with each face or facet having a fiducial disposed or created thereon. However, the bone fiducial need not have multiple faces, and in fact may take any shape so long as that shape can be tracked within the video images.
  • the bone fiducial, such as bone fiducial 128, may be attached to the bone 200 in any suitable form (e.g., via the screw portion of the bone fiducial 128 visible in FIG. 1).
  • the patterns of the fiducials on each facet are designed to provide information regarding the position and orientation of the bone fiducial 128 in the three-dimensional coordinate space of the view of the arthroscope 108. More particularly, the pattern is selected such that the position and orientation of the bone fiducial 128 may be determined from images captured by the arthroscope 108 and attached camera (FIG. 1 ).
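  • The disclosure does not specify how the fiducial pattern is decoded; as an illustrative sketch, estimating the position and orientation of the bone fiducial from a known pattern can be posed as a perspective-n-point (PnP) problem. The example below uses OpenCV's solver with made-up pattern geometry, intrinsics, and a simulated detection; none of these values come from the disclosure.

```python
# Sketch of estimating the bone fiducial's pose from detected pattern points,
# assuming the pattern geometry is known in the fiducial's own frame and that
# its 2D detections in the arthroscopic image are already available (the
# detection step itself is not shown). Uses OpenCV's PnP solver; the fiducial
# model points and camera parameters are illustrative only.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
dist = np.zeros(5)                                            # assume distortion already removed

# Known 3D locations of pattern features on one planar facet of the fiducial (mm).
model_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0],
                      [0, 10, 0], [5, 2, 0], [3, 7, 0]], dtype=float)

# Simulate a ground-truth pose and "detect" the corresponding image points.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([5.0, -3.0, 150.0])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)

# Recover the fiducial pose (rotation + translation) in the camera frame.
ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("pose recovered:", ok, "\nR =\n", R, "\nt =", tvec.ravel())
```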
  • the probe 124 is also shown as partially visible within the view of the arthroscope 108. The probe 124 may be used, as discussed more below, to identify a plurality of surface features on the bone 200 as part of the registration of the bone 200 to the three-dimensional bone model.
  • the probe 124 and/or the aimer 126 may carry their own, unique fiducials, such that their respective poses may be calculated from the one or more fiducials present in the video stream.
  • the medical instrument used to help with registration of the three-dimensional bone model, be it the probe 124, the aimer 126, or any other suitable medical device, may omit carrying fiducials. Stated otherwise, in such examples the medical instrument has no fiducial markers. In such cases, the pose of the medical instrument may be determined by a machine learning model, discussed in more detail below.
  • the images captured by the arthroscope 108 and attached camera are subject to optical distortion in many forms.
  • the visual field between distal end of the arthroscope 108 and the bone 200 within the surgical site is filled with fluid, such as bodily fluids and saline used to distend the joint.
  • Many arthroscopes have one or more lenses at the distal end that widen the field of view, and the wider field of view causes a “fish eye” effect in the captured images.
  • the optical elements within the arthroscope (e.g., rod lenses) may likewise introduce distortion.
  • the camera may have various optical elements for focusing the images received onto the capture array, and the various optical elements may have aberrations inherent to the manufacturing and/or assembly process.
  • the endoscopic optical system, prior to use within each surgical procedure, is calibrated to account for the various optical distortions.
  • the calibration creates a characterization function that characterizes the optical distortion, and frames of the video stream may be compensated using the characterization function prior to further analysis.
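  • The form of the characterization function is not fixed by the disclosure; a common choice is a radial/tangential lens model whose coefficients are estimated during calibration. The sketch below assumes such a model with placeholder intrinsics and coefficients and compensates a frame with OpenCV.

```python
# A minimal sketch of compensating frames with a previously estimated
# characterization of the optical distortion. A standard radial/tangential
# lens model is assumed; the intrinsics and coefficients are stand-ins for
# values that would come from the calibration step.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.array([-0.35, 0.12, 0.001, -0.0005, 0.0])    # k1, k2, p1, p2, k3 (assumed)

frame = np.zeros((480, 640, 3), dtype=np.uint8)               # stand-in for a video frame
frame[::40, :, :] = 255                                       # grid lines to visualize the warp

# Compensate the frame before any further analysis (registration, tracking, ...).
undistorted = cv2.undistort(frame, K, dist_coeffs)
print(undistorted.shape)
```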
  • the next example step in the intraoperative procedure is the registration of the bone model created during the planning stage.
  • the three-dimensional bone model is obtained by or provided to the surgical controller 118.
  • the surgical controller 118 receives the three-dimensional bone model, and assuming the arthroscope 108 is inserted into the knee by way of a port or portal through the patient’s skin, the surgical controller 118 also receives video images of a portion of the lower end of the femur.
  • the surgical controller 118 registers the three-dimensional bone model to the images of the femur received by way of the arthroscope 108 and camera 110.
  • the bone fiducial 128 is attached to the femur.
  • the bone fiducial placement is such that the bone fiducial is within the field of view of the arthroscope 108.
  • the bone fiducial 128 is placed within the intercondylar notch superior to the expected location of the tunnel through the lateral condyle.
  • the bone fiducial 128 is placed on the femoral head.
  • the surgical controller 118 (FIG. 1 ) is provided or determines a plurality of surface features of an outer surface of the bone.
  • Identifying the surface features may take several forms, including a touch-based registration using the probe 124 without a carried fiducial, a touchless registration technique in which the surface features are identified after resolving the motion of the arthroscope 108 and camera relative to the bone fiducial 128, and a third technique which uses a patient-specific instrument.
  • the surgeon may touch a plurality of locations using the probe 124 (FIG. 1 ).
  • receiving the plurality of surface features of the outer surface of the bone may involve the surgeon “painting” the outer surface of the bone.
  • “Painting” is a term of art that does not involve application of color or pigment, but instead implies motion of the probe 124 when the distal end of the probe 124 is touching bone.
  • the probe 124 does not carry or have a fiducial visible to the arthroscope 108 and the camera 110. It follows that the pose of the probe 124 and the location of the distal tip of the probe 124 needs to be determined in order to gather the surface features for purposes of registering the three- dimensional bone model.
  • FIG. 3 shows a method 300 in accordance with at least some embodiments of the present disclosure.
  • the example method 300 may be implemented in software within a computer system, such as the surgical controller 118.
  • the example method 300 comprises obtaining a three-dimensional bone model (block 302). That is to say, in the example method 300, what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., CT, MRI) taken preoperatively or intraoperatively. With the bone segmented from or within the images, the three-dimensional bone model may be created.
  • the three-dimensional bone model may take any suitable form, such as a computer-aided design (CAD) model, a point cloud of data points with respect to an arbitrary origin, or a parametric representation of a surface expressed using analytical mathematical equations.
  • the three-dimensional bone model is defined with respect to the origin and in any suitable orthogonal basis.
  • the next step in the example method 300 is capturing video images of the bone fiducial attached to the bone (block 304).
  • the capturing is performed intraoperatively.
  • the capturing of video images is by way of the arthroscope 108 and camera 110.
  • Other endoscopes may be used, such as endoscopes in which the capture array resides at the distal end of the device (e.g., chip-on-the-tip devices).
  • the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device.
  • the video images may be provided to the surgical controller 118 in any suitable form.
  • the next step in the example method 300 is determining locations of a distal tip of the medical instrument visible within the video images (block 306), where the distal tip is touching the bone in at least some of the frames of the video images, and the medical instrument does not have a fiducial. Determining the locations of the distal tip of the medical instrument may take any suitable form. In one example, determining the locations may include segmenting the medical instrument in the frames of the video images (block 308). The segmenting may take any suitable form, such as applying the video images to a segmentation machine learning algorithm.
  • the segmentation machine learning algorithm may take any suitable form, such as a neural network or convolutional neural network trained with a training data set showing the medical instrument in a plurality of known orientations. The segmentation machine learning algorithm may produce segmented video images where the medical instrument is identified or highlighted in some way (e.g., box, brightness increased, other objects removed).
  • the example method 300 may estimate a plurality of poses of the medical instrument within a respective plurality of frames of the video images (block 310).
  • the estimating the poses may take any suitable form, such as applying the video images to a pose machine learning algorithm.
  • the pose machine learning algorithm may take any suitable form, such as a neural network or convolutional neural network trained to perform six-dimensional pose estimation.
  • the resultant of the pose machine learning algorithm may be, for at least some of the frames of the video images, an estimated pose of the medical instrument in the reference frame of the video images and/or in the reference frame provided by the bone fiducial. That is, the resultant of the pose machine learning algorithm may be a plurality of poses, one pose each for at least some of the frames of the segmented video images. While in many cases a pose may be determined for each frame, in other cases it may not be possible to make a pose estimation for at least some frames because of video quality issues, such as motion blur caused by electronic shutter operation.
  • the next step in the example method 300 is determining the locations based on the plurality of poses (block 312).
  • the location of the distal tip can be determined in the reference frame of the video images and/or the bone fiducial.
  • the resultant is a set of locations that, at least some of which, represent locations of the outer surface of the bone.
  • FIG. 3 shows an example three-step process for determining the locations of the distal tip of the medical instrument.
  • in other cases, a single machine learning model, such as a convolutional neural network, may perform these steps together. For example, the convolutional neural network may segment the medical instrument, perform the six-dimensional pose estimation, and determine the location of the distal tip in each frame.
  • the training data set in such a situation would include a data set in which each frame has the medical device segmented, the six-dimensional pose identified, and the location of the distal tip identified.
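  • The following sketch outlines the per-frame flow of blocks 308-312 (segment the instrument, estimate its pose, locate the distal tip). The two network calls are stand-in placeholders rather than trained models; only the final geometric step, mapping a known tip offset through the estimated pose, follows directly from the description.

```python
# Sketch of the per-frame flow in blocks 308-312: segment the instrument,
# estimate its 6D pose, then place the known distal-tip offset into the camera
# (or bone-fiducial) frame. The two "models" are stand-in callables, not real
# trained networks.
import numpy as np

TIP_IN_INSTRUMENT = np.array([0.0, 0.0, 120.0])   # assumed tip offset in the instrument frame (mm)

def segment_instrument(frame):
    """Placeholder for the segmentation network: returns a binary mask."""
    return frame.mean(axis=2) > 200

def estimate_pose(frame, mask):
    """Placeholder for the 6D pose network: returns rotation R and translation t."""
    R = np.eye(3)
    t = np.array([10.0, -5.0, 140.0])
    return R, t

def tip_locations(frames):
    locations = []
    for frame in frames:
        mask = segment_instrument(frame)
        if mask.sum() == 0:
            continue                      # e.g., motion blur: skip frames without a usable pose
        R, t = estimate_pose(frame, mask)
        locations.append(R @ TIP_IN_INSTRUMENT + t)
    return np.array(locations)

frames = [np.full((480, 640, 3), 230, dtype=np.uint8) for _ in range(3)]
print(tip_locations(frames))
```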
  • the output of the determining step 306 may be a segmented video stream distinct from the video images captured at step 304.
  • the later method steps may use both segmented video stream and the video images to perform the further tasks.
  • the location information may be combined with the video images, such as being embedded in the video images, or added as metadata to each frame of the video images.
  • FIG. 4 is an example video display showing portions of a femur and a bone fiducial during a registration procedure. Although described with respect to a distal end of a femur, the principles and techniques described and shown in FIG. 4 can be applied to other anatomical structures/procedures, such as a femoral head for hip procedures as described herein.
  • the display may be shown, for example, on the display device 114 associated with the device cart 102, or any other suitable location.
  • visible in the main part of the display of FIG. 4 are an intercondylar notch 400, a portion of the lateral condyle 402, a portion of the medial condyle 404, and the example bone fiducial 128.
  • Shown in the upper right corner of the example display is a depiction of the bone, which may be a rendering 406 of the bone created from the three-dimensional bone model. Shown on the rendering 406 is a recommended area 408, the recommended area 408 being portions of the surface of the bone to be “painted” as part of the registration process. Shown in the lower right corner of the example display is a depiction of the bone, which again may be a rendering 412 of the bone created from the three-dimensional bone model. Shown on the rendering 412 are a plurality of surface features 416 on the bone model that have been identified as part of the registration process. Further shown in the lower right corner of the example display is a progress indicator 418, showing the progress of providing and receiving of locations on the bone. The example progress indicator 418 is a horizontal bar having a length that is proportional to the number of locations received, but any suitable graphic or numerical display showing progress may be used (e.g., 0% to 100%).
  • the surgical controller 118 receives the surface features on the bone, and may display each location both within the main display as dots or locations 416, and within the rendering shown in the lower right corner. More specifically, the example surgical controller 118 overlays indications of identified surface features 416 on the display of the images captured by the arthroscope 108 and camera 110, and in the example case shown, also overlays indications of identified surface features 416 on the rendering 412 of the bone model. Moreover, as the number of identified locations 416 increases, the surgical controller 118 also updates the progress indicator 418.
  • the plurality of surface features 416 may be, or the example surgical controller 118 may generate from them, a registration model relative to the bone fiducial 128 (block 314).
  • the registration model may take any suitable form, such as a computer-aided design (CAD) model or point cloud of data points in any suitable orthogonal basis.
  • the registration model, regardless of the form, may have fewer overall data points or less “structure” than the bone model created by the non-invasive computer imaging (e.g., MRI).
  • the goal of the registration model is to provide the basis for the coordinate transforms and scaling used to correlate the bone model to the registration model and relative to the bone fiducial 128.
  • the next step in the example method 300 is registering the bone model relative to the location of the bone fiducial based on the registration model (block 316).
  • Registration may conceptually involve testing a plurality of coordinate transformations and scaling values to find a correlation that has a sufficiently high correlation or confidence factor. Once a correlation is found with the sufficiently high confidence factor, the bone model is said to be registered to the location of the bone fiducial. Thereafter, the example registration method 300 may end (block 318); however, the surgical controller 118 may then use the registered bone model to provide computer-assisted navigation regarding a procedure involving the bone.
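  • The disclosure describes registration only abstractly, as a search over coordinate transformations and scaling values for a sufficiently high correlation. As one concrete possibility, when point correspondences are available the transformation and scale can be solved in closed form (a Procrustes/Kabsch-style fit); an iterative-closest-point loop would alternate such a solve with correspondence search. The sketch below is illustrative and uses synthetic points.

```python
# Closed-form least-squares alignment (scale, rotation, translation) between
# corresponding point sets, shown on synthetic data. A full registration would
# wrap this in an ICP-style loop with nearest-neighbour correspondence search.
import numpy as np

def similarity_align(src, dst):
    """Return scale s, rotation R, translation t minimizing ||s*R*src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform between bone-model points and
# "registration model" points (surface features relative to the bone fiducial).
rng = np.random.default_rng(1)
bone_model_pts = rng.normal(size=(50, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
registration_pts = 1.2 * (bone_model_pts @ R_true.T) + np.array([4.0, -2.0, 7.0])
s, R, t = similarity_align(bone_model_pts, registration_pts)
print("recovered scale:", round(s, 3))
```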
  • registration of the bone model involves a touch-based registration technique using the probe 124 without a carried fiducial.
  • other registration techniques are possible, such as a touchless registration technique.
  • the example touchless registration technique again relies on placement of the bone fiducial 128.
  • the bone fiducial may have fewer faces with respective fiducials. Once placed, the bone fiducial 128 represents a fixed location on the outer surface of the bone in the view of the arthroscope 108, even as the position of the arthroscope 108 is moved and changed relative to the bone fiducial 128.
  • the surgical controller 118 determines a plurality of surface features of an outer surface of the bone, and in this example determining the plurality of surface features is based on a touchless registration technique in which the surface features are identified based on motion of the arthroscope 108 and camera 110 relative to the bone fiducial 128.
  • Another technique for registering the bone model to the bone uses a patient-specific instrument.
  • a registration model is created, and the registration model is used to register the bone model to the bone visible in the video images.
  • the registration model is used to determine a coordinate transformation and scaling to align the bone model to the actual bone.
  • use of the registration model may be omitted, and instead the coordinate transformations and scaling may be calculated directly.
  • FIG. 5 shows a method 500 in accordance with at least some embodiments.
  • the example method may be implemented in software within one or more computer systems, such as, in part, the surgical controller 118.
  • the example method 500 comprises obtaining a three-dimensional bone model (block 502).
  • the three-dimensional bone model may be created by segmenting a plurality of non-invasive images (e.g., MRI) taken preoperatively or intraoperatively.
  • the method 500 further includes generating a patient-specific instrument that has a feature designed to couple to the bone represented in the bone model in only one orientation (block 504).
  • Generating the patient-specific instrument may first involve selecting a location at which the patient-specific instrument will attach.
  • a device or computer system may analyze the bone model and select the attachment location.
  • the attachment location may be a unique location in the sense that, if a patient-specific instrument is made to couple to the unique location, the patient-specific instrument will not couple to the bone at any other location.
  • the location selected may be at or near the upper or superior portion on the intercondylar notch.
  • if the bone model shows another location with a unique feature, such as a bone spur or other raised or sunken surface anomaly, such a unique location may be selected as the attachment location for the patient-specific instrument.
  • the location may be selected based on a location, within the hip joint, of a bone spur or other anatomical feature associated with the hip procedure.
  • forming the patient-specific instrument may take any suitable form.
  • a device or computer system may directly print, such as using a 3D printer, the patient-specific instrument.
  • the device or computer system may print a model of the attachment location, and the model may then become the mold for creating the patient-specific instrument.
  • the model may be the mold for an injection-molded plastic or casting technique.
  • the patient-specific instrument carries one or more fiducials, but as mentioned above, in other cases the patient-specific instrument may itself be tracked and thus carry no fiducials.
  • the method 500 further includes coupling the patient-specific instrument to the bone, in some cases the patient-specific instrument having the fiducial coupled to an exterior surface (block 506).
  • the attachment location for the patient-specific instrument can be selected to be unique such that the patient-specific instrument couples to the bone in only one location and in only one orientation.
  • the patient-specific instrument may be inserted arthroscopically. That is, the attachment location may be selected such that a physical size of the patient-specific instrument enables insertion through the ports/portals in the patient’s skin.
  • the patient-specific instrument may be made or constructed of a flexible material that enables the patient-specific instrument to deform for insertion in the surgical site, yet return to the predetermined shape for coupling to the attachment location.
  • the patient-specific instrument may be a rigid device with fewer size restrictions.
  • the method 500 further includes capturing video images of the patient-specific instrument (block 508).
  • the capturing may be performed intraoperatively.
  • the capturing of video images is by the surgical controller 118 by way of arthroscope 108 and camera 110.
  • the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device.
  • the video images may be provided to the surgical controller 118 in any suitable form.
  • the example method 500 further includes registering the bone model based on the location of the patient-specific instrument (block 510). That is, given that the patient-specific instrument couples to the bone at only one location and in only one orientation, the location and orientation of the patient-specific instrument is directly related to the location and orientation of the bone, and thus the coordinate transformations and scaling for the registration may be calculated directly. Thereafter, the example method 500 may end; however, the surgical controller 118 may then use the registered bone model to provide computer-assisted navigation regarding a surgical task or surgical procedure involving the bone.
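  • Because the patient-specific instrument seats on the bone in exactly one pose, the registration reduces to composing transforms: the tracked pose of the instrument in the camera frame with the planned pose of the instrument relative to the bone model. The sketch below illustrates that composition with placeholder matrices; the variable names and values are hypothetical.

```python
# Registering the bone model via a patient-specific instrument, expressed as
# homogeneous-transform composition. Poses below are illustrative placeholders.
import numpy as np

def rt_to_hom(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Pose of the instrument in the camera frame, e.g., from tracking its fiducial.
T_cam_instr = rt_to_hom(np.eye(3), np.array([12.0, -4.0, 150.0]))

# Planned pose of the instrument relative to the bone model (known from block 504).
T_bone_instr = rt_to_hom(np.eye(3), np.array([0.0, 25.0, 10.0]))

# Bone model expressed in the camera frame: T_cam_bone = T_cam_instr * inv(T_bone_instr).
T_cam_bone = T_cam_instr @ np.linalg.inv(T_bone_instr)
print(T_cam_bone)
```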
  • the surgical controller 118 may provide guidance regarding a surgical task of a surgical procedure.
  • the specific guidance is dependent upon the surgical procedure being performed and the stage of the surgical procedure.
  • a non-exhaustive list of guidance comprises: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing location at which to cut and/or resect the bone; reaming the bone by a certain depth along a certain direction; placing a device (suture, anchor or other) at a certain location; placing a suture at a certain location; placing an anchor at a certain location; showing regions of the bone to touch and/or avoid; and identifying regions and/or landmarks of the anatomy.
  • the guidance may include highlighting within a version of the video images displayed on a display device, which can be the arthroscopic display or a see-through display, or by communicating to a virtual reality device or a robotic tool.
  • Surgical navigation systems and methods of the present disclosure use a laser projector and arthroscope to perform touchless registration techniques as described below in more detail.
  • For example, a simple and affordable laser projector (e.g., a laser scanner) may be used.
  • the laser projector is used to project a contour onto the anatomical surface (which may be referred to as “structured light”).
  • the projected contour is subsequently detected in the arthroscopic video using an image processing technique.
  • the 3D position of the identified laser contour in the reference frame of the camera can be determined.
  • the contour is reconstructed and represented in the reference frame of the patient and registered with the pre-operative model.
  • Touchless registration (i.e., registration that does not require contact between a touch probe and an anatomical surface) is more efficient, eliminates the time-consuming process of physical digitization, and minimizes the risk of bone or cartilage damage.
  • the area accessible by the surgeon is increased.
  • touchless registration facilitates femoral acetabular impingement (FAI) and other procedures where it is difficult to access the target anatomy.
  • structured light techniques include projecting a contour independently of the surface of interest. Consequently, 3D points on surfaces that are not contained in the pre-operative 3D model may be reconstructed, resulting in a percentage of outliers prohibitively large for some registration algorithms to function properly. Accordingly, systems and methods according to the present disclosure include using a deep learning-based model that automatically segments the arthroscopic images and identifies the regions-of-interest for which points will be reconstructed.
  • Example aspects of the present disclosure include: a low-cost structured light (SL) system that includes a calibrated laser projector and a standard arthroscope and enables the detection of the projected light contour for inferring 3D point data intra-operatively and accomplishing the registration;
  • a deep learning-based model designed for automatic segmentation of the arthroscopic video that identifies the regions of the arthroscopic images that correspond to structures in the pre-operative anatomical model, enhancing precision and efficiency in the registration process; and
  • a data augmentation technique that improves the performance of the automatic arthroscopic image segmentation model by synthesizing the laser projection in the arthroscopic images.
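  • The exact synthesis procedure for the data augmentation is not given in this summary; a simple stand-in, sketched below, overlays a randomly curved, laser-colored stripe on a training image and emits the matching mask. The color, curve shape, and helper name are assumptions.

```python
# Illustrative data augmentation: synthesize a laser-like stripe on a training
# image and produce its segmentation mask. Parameters are arbitrary stand-ins.
import cv2
import numpy as np

def add_synthetic_laser(image, color=(0, 0, 255), thickness=3, seed=None):
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    xs = np.linspace(0.1 * w, 0.9 * w, 20)
    ys = h * (0.5 + 0.2 * np.sin(xs / w * np.pi * rng.uniform(1, 3))
              + rng.normal(0, 0.01, xs.size))          # gently curved, slightly noisy stripe
    pts = np.stack([xs, ys], axis=1).astype(np.int32).reshape(-1, 1, 2)
    out = image.copy()
    cv2.polylines(out, [pts], isClosed=False, color=color,
                  thickness=thickness, lineType=cv2.LINE_AA)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.polylines(mask, [pts], isClosed=False, color=255, thickness=thickness)
    return out, mask

frame = np.full((480, 640, 3), 80, dtype=np.uint8)      # stand-in arthroscopic frame
augmented, laser_mask = add_synthetic_laser(frame, seed=0)
print(augmented.shape, int(laser_mask.sum() > 0))
```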
  • the laser projector includes one or more fiducial markers.
  • the laser projector is pre-calibrated using the fiducial markers. Accordingly, a pose of the laser projector can be tracked (e.g., by the surgical navigation system as described herein).
  • the laser projector is used to project a laser onto an anatomical surface shown in an arthroscopic image.
  • the image is processed to remove distortion. In other examples, removal of distortion can be omitted, performed in a subsequent step, etc.
  • a subtraction operation is performed on the image (e.g., to subtract a particular color channel representation of the image from a grayscale representation of the image) to obtain a subtraction result and segmentation of the laser projection is performed on the subtraction result.
  • a binary mask and various processing techniques are applied to obtain a contour line of the surface. For example, 3D points of the contour line can be reconstructed using techniques such as optical triangulation.
  • FIGS. 6A and 6B show an example laser projector 600 and arthroscope 604 configured in accordance with the principles of the present disclosure.
  • the laser projector 600 and arthroscope 604 are configured for use with a surgical navigation system, such as the system 100.
  • the laser projector 600 includes one or more fiducial markers 608 (e.g., located on a distal tip or end 612 of the laser projector 600).
  • the fiducial markers 608 may be located as closely as possible to a source/output of a laser contour projected from the laser projector 600, such as an output lens 616.
  • a relative pose between the laser projector 600 and the arthroscope 604 is known at every frame-time instant by tracking the fiducial markers 608 using the surgical navigation techniques discussed herein.
  • the laser projector 600 and the arthroscope 604 can be inserted through different portals providing access to the surgical site and can be moved independently of one another.
  • the laser projector 600 can be calibrated pre-operatively (e.g., the equation of the plane of light in laser coordinates is estimated).
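Building on the relative-pose tracking described above, the following minimal sketch (assumed names and placeholder values, not part of the disclosure) shows how a tracked marker pose and a fixed, pre-calibrated marker-to-emitter transform could be composed into the laser pose in camera coordinates that the triangulation step later needs:

```python
import numpy as np

def rigid(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical per-frame output of marker tracking: pose of the laser-projector
# marker expressed in camera coordinates.
T_cam_marker = rigid(np.eye(3), np.array([0.02, 0.00, 0.15]))

# Hypothetical result of pre-operative calibration: fixed transform from the
# marker to the laser emitter (and hence to plane-of-light coordinates).
T_marker_laser = rigid(np.eye(3), np.array([0.00, 0.00, 0.03]))

# Composition gives the laser (plane-of-light) pose in camera coordinates for
# the current frame.
T_cam_laser = T_cam_marker @ T_marker_laser
print(T_cam_laser)
```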
  • Within the arthroscopic footage (i.e., a feed of arthroscopic images), the laser contour is detected using image processing techniques.
  • a deep learning-based model is introduced to segment areas within the arthroscopic video that correspond to surfaces in the pre-operative 3D model (e.g., femur bone and cartilage surfaces).
  • By reconstructing the laser projection within the segmented regions, a 3D point set can be recovered. This process filters outlier 3D points, enhancing the accuracy of the data. The resulting contours or curves represent the anatomical structures in the arthroscopic footage. Registration (e.g., curve-surface registration) can then be performed using the point cloud.
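As one possible realization of this outlier filtering, the sketch below keeps only detected laser-contour pixels that fall inside the region-of-interest mask produced by the segmentation model; the function name, array layouts, and toy data are assumptions for illustration only:

```python
import numpy as np

def filter_contour_by_mask(contour_px, roi_mask):
    """Keep only laser-contour pixels that fall inside the segmented region-of-interest.

    contour_px: (N, 2) array of (row, col) pixel coordinates of the detected laser contour.
    roi_mask:   (H, W) boolean array from the segmentation model (True = bone/cartilage).
    """
    rows = contour_px[:, 0].astype(int)
    cols = contour_px[:, 1].astype(int)
    h, w = roi_mask.shape
    in_bounds = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    keep = np.zeros(len(contour_px), dtype=bool)
    keep[in_bounds] = roi_mask[rows[in_bounds], cols[in_bounds]]
    return contour_px[keep]

# Toy example: only the first contour point lies inside the mask and survives.
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:500] = True
contour = np.array([[150, 250], [150, 600], [400, 250]])
print(filter_contour_by_mask(contour, mask))
```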
  • Example surgical navigation techniques using structured light include detection of the laser projection/line/contour in the arthroscopic image (described in more detail in FIGS. 7A, 7B, 7C, and 7D), calibration of the plane of light of the laser projector (described in more detail in FIGS. 8A, 8B, and 8C), and 3D point reconstruction through triangulation (described in more detail in FIG. 9).
  • An example laser contour detection process 700 is described in FIGS. 7A, 7B, 7C, and 7D.
  • distortion is removed from an original input image 706 (e.g., an image from an image feed obtained by an arthroscope).
  • the input image 706 includes a laser projection/contour 708 as projected upon a surface 710 by the laser projector.
  • the laser contour 708 corresponds to a green laser projection.
  • An undistorted image (i.e., an image resulting from removing the distortion from the original input image 706) is shown at 712.
  • a green channel (or other color-channel) representation 720 of the undistorted image 712 is subtracted from a grayscale image 722 (a grayscale representation of the undistorted image 712) to obtain a subtraction result 724.
  • the subtraction result 724 is binarized as shown at 728 to obtain a binary image 730, which shows a segmentation 732 of the laser projection in FIG. 7C. Since the laser projection as shown in FIG. 7A exhibits significant dispersion, the segmentation 732 of the laser projection/contour 708 appears as an amorphous blob or shape in FIG. 7C (as opposed to a defined line or other structured light shape). Contour detection is then performed as shown at 736 in FIG. 7D.
  • a principal component analysis (PCA)-based approach fits a contour to the segmentation 732 by (i) extracting a direction of maximum variance, (ii) sampling the segmentation 732 along the direction of maximum variance, and (iii) for each sample, determining a midpoint of a line segment contained in the segmentation 732 with perpendicular direction.
  • a contour resulting from contour detection is shown as the line 738.
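A minimal sketch of the detection pipeline of FIGS. 7A-7D is shown below using OpenCV and NumPy. It assumes an undistorted BGR frame and a green laser; the channel-difference ordering, the Otsu threshold, and the sampling count are implementation choices of this sketch rather than specifics of the disclosure:

```python
import cv2
import numpy as np

def detect_laser_contour(bgr_image, n_samples=100):
    """Segment a green laser blob and fit a contour line through it (PCA-based)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    green = bgr_image[:, :, 1].astype(np.float32)
    # Channel difference: pixels where the green channel dominates the overall
    # luminance stand out (the sign convention is a choice of this sketch).
    diff = np.clip(green - gray, 0, 255).astype(np.uint8)
    # Otsu thresholding yields a binary segmentation of the laser blob.
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(binary)
    if len(xs) < 2:
        return np.empty((0, 2))
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    # PCA: the first principal direction is the direction of maximum variance.
    mean = pts.mean(axis=0)
    centered = pts - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis, normal = vt[0], vt[1]
    proj = centered @ axis
    bins = np.linspace(proj.min(), proj.max(), n_samples)
    contour = []
    # Sample along the principal axis; for each slice take the midpoint of the
    # blob extent measured along the perpendicular direction.
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (proj >= lo) & (proj < hi)
        if np.any(sel):
            perp = centered[sel] @ normal
            mid = mean + 0.5 * (lo + hi) * axis + 0.5 * (perp.min() + perp.max()) * normal
            contour.append(mid)
    return np.asarray(contour)  # (M, 2) pixel coordinates (x, y) along the laser line
```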
  • removal of distortion can optionally be omitted, performed in a different sequence, etc.
  • the distortion can be removed subsequent to contour detection.
  • the laser projector can be configured to emit light having other types of patterns or geometries (e.g., sets of lines or contours, sets of points, a single point, etc.).
  • the steps described in FIGS. 7A, 7B, and 7C can be performed as described above, and contour detection as described in FIG. 7D can be configured to detect the relevant type of pattern (e.g., using suitable computer vision or machine learning techniques).
  • a calibration process 800 may be performed to calibrate the laser projector (estimate an equation of the plane of light in coordinates of the laser projector) using one or more calibration images 802, 804, and 806 as shown in FIGS. 8A, 8B, and 8C, respectively.
  • the calibration images 802, 804, and 806 may be captured/obtained using a camera (e.g., an arthroscope) or other imaging device during the calibration process 800.
  • the calibration process 800 uses a planar target 808 (e.g., a planar surface) that includes fiducial markers 810 (e.g., printed visual markers).
  • the planar target may also include a predetermined pattern, such as a checkerboard pattern.
  • a laser projector 812 is used to project a laser (e.g., laser line projection 814) onto the planar target 808. As shown, the laser projector 812 also includes one or more fiducial markers 816.
  • the planar target 808 is a bottom surface of a box or other container that can be filled with liquid (e.g., water) to simulate the distortion of a surgical site.
  • the calibration images 802, 804, and 806 show the planar target 808, the laser line projection 814, and the fiducial markers 810 and/or 816 for different poses of the laser projector 812.
  • Both the camera (not shown) and the laser projector 812 can be moved to enable the acquisition of a set of calibration images (a calibration set, calibration data set, etc.) with multiple, distinct poses.
  • the laser line projection 814 is detected as described above in FIGS. 7A, 7B, 7C, and 7D and 3D points are reconstructed in camera coordinates (e.g., by intersecting back-projection rays with the planar target 808 to obtain a line of intersection between the planar target 808 and a plane of the laser projector 812).
  • the reconstructed 3D points are then transformed to laser coordinates using the tracked pose of the laser projector 812 to obtain a 3D line corresponding to the laser line projection 814.
  • a set of 3D lines is obtained.
  • the set of 3D lines is provided to a plane fitting algorithm (e.g., a random sample consensus (RANSAC)-based plane fitting algorithm).
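The plane-of-light estimation can be sketched as a RANSAC plane fit followed by a least-squares refinement over the 3D points sampled from the reconstructed laser lines (already expressed in laser coordinates, as described above). Thresholds, iteration counts, and the toy data are placeholder assumptions:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=500, threshold=1e-3, seed=0):
    """Robustly fit a plane n.x + d = 0 to 3D points (e.g., laser-line points in laser coordinates)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the inlier set: plane through the centroid whose
    # normal is the direction of smallest variance.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid, full_matrices=False)
    n = vt[-1]
    d = -n @ centroid
    return n, d, best_inliers

# Toy check: noisy points near the plane z = 0.05 (in "laser" coordinates).
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-0.05, 0.05, (200, 2)),
                       0.05 + rng.normal(0, 1e-4, 200)])
n, d, inliers = fit_plane_ransac(pts, threshold=5e-4)
print(n, d, inliers.sum())
```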
  • FIG. 9 shows an example 3D point reconstruction process 900 based on optical triangulation techniques.
  • structured light systems project a light pattern onto a scene and use a camera to detect the light pattern.
  • the intersection of a light plane transformed to the camera coordinate system with the camera ray yields 3D coordinates of respective points.
  • a 3D contour in camera coordinates can be obtained.
  • the points can then be transformed to a reference frame of a fiducial marker 902 attached to patient anatomy. By repeating this process for a set of frames, reconstruction of a denser point cloud representing the anatomical structures can be achieved.
  • a laser projector (not shown) emits/projects a light plane L onto a scene (e.g., onto patient anatomy within a surgical site, such as a surface of a femur 904).
  • the light plane L intersects the scene in a 3D curve 908 (e.g., corresponding to a curve/contour of the surface of the femur 904).
  • the curve is imaged/captured (e.g., by an arthroscopic camera) as indicated by the 2D contour 912 on an image plane 916 of the captured image.
  • a back-projection ray (e.g., as indicated by a line 920) that passes through both the pixel and an optical center C of the camera is created.
  • a 3D point X corresponding to a point on the curve 908 can then be reconstructed by intersecting the ray 920 with the light plane L.
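The ray-plane intersection of FIG. 9 can be sketched as follows; the intrinsic matrix, the plane parameters (assumed to have already been transformed into camera coordinates using the tracked laser pose), and the example pixels are illustrative values only:

```python
import numpy as np

def triangulate_laser_points(pixels, K, plane_n, plane_d):
    """Reconstruct 3D points in camera coordinates by intersecting back-projection
    rays with the laser light plane (expressed in camera coordinates).

    pixels:  (N, 2) undistorted pixel coordinates (u, v) on the detected laser contour.
    K:       3x3 camera intrinsic matrix.
    plane_n: unit normal of the light plane; plane_d: offset, so n.X + d = 0.
    """
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = (np.linalg.inv(K) @ uv1.T).T        # ray directions through the optical centre C = 0
    # For X = s * ray on the plane: n.(s * ray) + d = 0  ->  s = -d / (n.ray)
    denom = rays @ plane_n
    s = -plane_d / denom
    return rays * s[:, None]                   # (N, 3) 3D points in camera coordinates

# Toy example with assumed intrinsics and a light plane 0.1 m in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
n, d = np.array([0.0, 0.0, 1.0]), -0.1         # plane z = 0.1
print(triangulate_laser_points(np.array([[320, 240], [400, 240]]), K, n, d))
```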
  • a pre-operative femur bone and cartilage model can be registered with intra-operative data.
  • however, other anatomical structures (e.g., the proximal tibia, the anterior and posterior cruciate ligaments, the meniscus, etc.) may also be visible in the arthroscopic footage.
  • capturing arthroscopic footage containing solely the structures of interest may be difficult or impossible.
  • These additional structures are not contained in the pre-operative model, and, therefore, it is desirable to remove these additional structures from the reconstructed point set obtained with the structured light system described herein.
  • systems and methods according to the present disclosure may implement automatic segmentation modeling techniques for arthroscopic videos/video data.
  • automatic segmentation may be performed using a convolutional neural network architecture, such as a U-Net architecture.
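The disclosure does not prescribe a particular network implementation; the PyTorch sketch below is a small U-Net-style encoder-decoder that indicates the general shape of such a segmentation model, with arbitrary channel counts, depth, and class count:

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """A compact U-Net-style encoder-decoder for per-pixel segmentation of arthroscopic frames."""
    def __init__(self, in_channels=3, n_classes=2, base=16):
        super().__init__()
        self.enc1 = double_conv(in_channels, base)
        self.enc2 = double_conv(base, base * 2)
        self.enc3 = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        # Input height and width should be divisible by 4 for the skip connections to align.
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        e3 = self.enc3(self.pool(e2))          # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel class logits

# One pass over a dummy frame; at inference, argmax over the class dimension
# yields the region-of-interest mask used to filter reconstructed points.
model = SmallUNet()
logits = model(torch.randn(1, 3, 256, 256))
mask = logits.argmax(dim=1)                    # (1, 256, 256)
```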
  • an augmentation technique to enhance the performance of automatic arthroscopic image segmentation may be implemented by synthesizing laser projections in a plurality of images in a training dataset.
  • a new image containing a synthetic laser projection is generated.
  • a random pixel within a region-of-interest is initially selected.
  • a forward projection ray that passes through the pixel is intersected with the 3D model, yielding a 3D point.
  • a random 3D vector is chosen that, together with the 3D point, defines a 3D plane.
  • This 3D plane corresponds to the synthetic laser plane of light and is subsequently intersected with the 3D model, yielding a 3D contour.
  • by projecting the 3D contour into the image plane, a synthetic laser projection can be generated in the image.
  • Laser light dispersion can be simulated by applying a Gaussian intensity distribution centered at the back-projected 2D contour.
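As a sketch of this final rendering step, the snippet below projects an (assumed, already computed) synthetic 3D laser contour into an image and adds a Gaussian intensity profile on the green channel to mimic dispersion; the function name, pose/intrinsics inputs, and brute-force distance computation are illustrative only:

```python
import numpy as np

def splat_synthetic_laser(image, contour_3d, K, T_cam_model, sigma_px=2.0, strength=180):
    """Render a synthetic (green) laser stripe into an arthroscopic frame.

    contour_3d:  (N, 3) points of the synthetic laser contour in model coordinates
                 (e.g., from intersecting a random plane with the 3D bone model).
    K:           3x3 camera intrinsics; T_cam_model: 4x4 pose of the model in camera coordinates.
    """
    h, w = image.shape[:2]
    pts_h = np.column_stack([contour_3d, np.ones(len(contour_3d))])
    pts_cam = (T_cam_model @ pts_h.T).T[:, :3]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective projection to pixels
    # Squared distance of every pixel to the nearest projected contour point.
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = np.full((h, w), np.inf)
    for u, v in uv:
        dist2 = np.minimum(dist2, (xs - u) ** 2 + (ys - v) ** 2)
    # A Gaussian intensity profile around the projected contour simulates dispersion.
    glow = strength * np.exp(-dist2 / (2 * sigma_px ** 2))
    out = image.astype(np.float32).copy()
    out[..., 1] = np.clip(out[..., 1] + glow, 0, 255)  # add to the green channel
    return out.astype(np.uint8)
```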
  • FIGS. 10A, 10B, 10C, and 10D show example images 1000, 1004, 1008, and 1012 with respective synthetic laser projections 1016, 1020, 1024, and 1028 obtained as described above.
  • the images 1000, 1004, 1008, and 1012 correspond to real (i.e., captured) arthroscopic images obtained without any laser projection and modified to include the synthetic laser projections 1016, 1020, 1024, and 1028.
  • the images 1000, 1004, 1008, and 1012 can then be used to train a model, such as a semantic segmentation model, that is subsequently used to perform techniques described herein based on actual laser projections on an anatomical surface.
  • FIG. 11 shows an example method 1100 for performing touchless registration techniques using a projected laser contour in accordance with the principles of the present disclosure.
  • the method 1100 may be performed by one or more processing devices or processors, computing devices, etc., such as the system 100 or another computing device executing instructions stored in memory.
  • One or more steps of the method 1100 may be omitted in various examples, and/or may be performed in a different sequence than shown in FIG. 11.
  • the steps may be performed sequentially or non-sequentially, two or more steps may be performed concurrently, etc.
  • One or more of the steps may be analogous to the touch-based registration described herein, such as described with respect to FIG. 5.
  • the method 1100 includes obtaining a three-dimensional bone model.
  • the three-dimensional bone model may be created by segmenting a plurality of non-invasive images (e.g., CT, MRI) taken preoperatively or intraoperatively. With the bone segmented from or within the images, the three-dimensional bone model may be created.
  • the three-dimensional bone model may take any suitable form, such as a computer-aided design (CAD) model, a point cloud of data points with respect to an arbitrary origin, or a parametric representation of a surface expressed using analytical mathematical equations.
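As one hypothetical route from a segmented image volume to such a model, scikit-image's marching cubes can convert a binary segmentation into a surface mesh whose vertices also serve as a point cloud; the toy volume below stands in for a real CT/MRI segmentation:

```python
import numpy as np
from skimage import measure

# Placeholder "bone" segmentation: a binary 3D volume (bone voxels = 1).
volume = np.zeros((64, 64, 64), dtype=np.uint8)
volume[20:44, 20:44, 20:44] = 1

# Marching cubes extracts a triangulated surface at the 0.5 iso-level.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5, spacing=(1.0, 1.0, 1.0))
point_cloud = verts                              # (N, 3) vertices, usable as a point cloud
print(point_cloud.shape, faces.shape)
```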
  • the method 1100 includes obtaining images (e.g., real-time or near real-time images) of a surgical environment including patient anatomy with (i) one or more visual markers (e.g., a bone fiducial marker, which may be referred to as a base marker) fixed to the patient anatomy, (ii) a laser contour line or projection (e.g., as projected onto the patient anatomy using a laser projector as described herein), and (iii) one or more visual markers located on the laser projector.
  • the laser projection is a single line (e.g., a line formed from the projection of a light plane projected/emitted from the laser projector such that an intersection of the light plane with the anatomical surface forms a line that is curved/contoured in accordance with the contour of the anatomical surface).
  • Obtaining the images may include obtaining images using an endoscopic/arthroscopic camera or other imaging device configured to provide an image feed.
  • the laser projector can be calibrated (e.g., pre-operatively) as described above with respect to FIGS. 8A, 8B, and 8C.
  • the method 1100 includes detecting the laser projection in the obtained images.
  • Detecting the laser projection may include various techniques, such as the laser contour detection process described above in FIGS. 7A, 7B, 7C, and 7D.
  • detecting the laser projection may include (i) removing distortion from an original input image that includes the laser projection to obtain an undistorted input image, (ii) subtracting a green (or other color corresponding to a color of the laser projection) channel representation of the undistorted image from a grayscale representation of the undistorted image to obtain a subtraction result, (iii) binarizing the subtraction result to obtain a binary image that includes a segmentation of the laser projection, and (iv) performing contour detection on the segmentation to identify a line corresponding to the projected laser.
  • the method 1100 includes reconstructing 3D points of surfaces (i.e., anatomical surfaces) in the images obtained by the arthroscope using detected laser projections (e.g., lines/contours) projected and captured within the images.
  • the points are reconstructed using optical triangulation techniques such as described in FIG. 9.
  • points corresponding to each of the identified lines are determined to reconstruct a point cloud (a set of points) representing the anatomical surface.
  • the method 1100 includes performing one or more functions based on the reconstructed point cloud.
  • the method 1100 may include registering the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points. Images obtained from the image feed can then be aligned with the pre-operative model using the reconstructed set of points.
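The disclosure refers generally to curve-surface registration; as a generic stand-in, the sketch below aligns the reconstructed point cloud to points sampled from the pre-operative model using a basic point-to-point ICP (nearest neighbour plus Kabsch), which is not necessarily the specific registration algorithm used by the system:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, n_iters=50):
    """Rigidly align a reconstructed point cloud (source) to pre-operative model points (target)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(n_iters):
        _, idx = tree.query(src)                 # closest model point for every reconstructed point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)    # cross-covariance for the Kabsch solution
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the overall transform
    return R, t  # maps original source points into the model frame: x -> R @ x + t

# Toy usage: a translated subset of the model points should be pulled back onto the model.
target = np.random.default_rng(2).normal(size=(500, 3))
source = target[::5] + np.array([0.01, -0.02, 0.03])
R, t = icp_rigid(source, target)
print(R, t)
```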
  • in some examples, the laser projector may not include the visual/fiducial markers described above.
  • in such examples, 6D object pose estimation algorithms can instead be used to infer the pose of the laser projector as required for 3D triangulation.
  • the visual/fiducial marker attached to the patient anatomy can be omitted by inferring the 6D pose of the anatomical model at every frame-time instant.
  • 6D pose estimation of the anatomical model can be achieved by performing single- or multi-view depth estimation of regions corresponding to the target anatomical model, or by reconstructing depth data using the laser projector for a single frame and registering the intra-operatively inferred depth data with the preoperative 3D target anatomical model using 3D registration techniques.
  • the contour of the laser projection can be detected using data-driven techniques, such as deep learning techniques.
  • the laser projector can project patterns other than the light plane described herein, such as two or more light points, light circles, etc.
  • the laser projector can project light of any wavelength as long as the wavelength can be sensed by an arthroscopic camera.
  • the laser projector can be attached to other surgical instruments.
  • the laser projector can be used to measure a distance between any two selected points in the reconstructed line/contour.
  • in such examples, a visual marker attached to the anatomy is not required; registration with the anatomical model is also not required.
  • FIG. 12 shows an example computer system or computing device 1200 configured to implement the various systems and methods of the present disclosure.
  • the computer system 1200 may correspond to one or more computing devices of the system 100, the surgical controller 118, a device that creates a patient-specific instrument, a tablet device within the surgical room, or any other system that implements any or all the various methods discussed in this specification.
  • the computer system 1200 may be configured to implement all or portions of the method 1100.
  • the computer system 1200 may be connected (e.g., networked) to other computer systems in a local-area network (LAN), an intranet, and/or an extranet (e.g., device cart 102 network), or at certain times the Internet (e.g., when not in use in a surgical procedure).
  • the computer system 1200 may be a server, a personal computer (PC), a tablet computer or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.
  • the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.
  • the computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1206 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 1208, which communicate with each other via a bus 1210.
  • Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1202 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
  • the processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like.
  • the processing device 1202 is configured to execute instructions for performing any of the operations and steps discussed herein. Once programmed with specific instructions, the processing device 1202, and thus the entire computer system 1200, becomes a special-purpose device, such as the surgical controller 118.
  • the computer system 1200 may further include a network interface device 1212 for communicating with any suitable network (e.g., the device cart 102 network).
  • the computer system 1200 also may include a video display 1214 (e.g., the display device 114), one or more input devices 1216 (e.g., a microphone, a keyboard, and/or a mouse), and one or more speakers 1218.
  • the video display 1214 and the input device(s) 1216 may be combined into a single component or device (e.g., an LCD touch screen).
  • the data storage device 1208 may include a computer-readable storage medium 1220 on which the instructions 1222 (e.g., implementing any methods and any functions performed by any device and/or component depicted and/or described herein) embodying any one or more of the methodologies or functions described herein are stored.
  • the instructions 1222 may also reside, completely or at least partially, within the main memory 1204 and/or within the processing device 1202 during execution thereof by the computer system 1200. As such, the main memory 1204 and the processing device 1202 also constitute computer-readable media.
  • the instructions 1222 may further be transmitted or received over a network via the network interface device 1212.
  • While the computer-readable storage medium 1220 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
  • the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Landscapes

  • Image Processing (AREA)

Abstract

A system for performing touchless registration of patient anatomy includes memory storing instructions and one or more processing devices configured to execute the instructions. Executing the instructions causes the system to obtain, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, to detect the laser projection in the image obtained from the camera, to obtain a three-dimensional set of points corresponding to the detected laser projection, and to register the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
PCT/US2025/029515 2024-05-31 2025-05-15 Lumière structurée pour enregistrement 3d sans contact dans une navigation chirurgicale à base de vidéo Pending WO2025250376A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463654169P 2024-05-31 2024-05-31
US63/654,169 2024-05-31

Publications (1)

Publication Number Publication Date
WO2025250376A1 true WO2025250376A1 (fr) 2025-12-04

Family

ID=97871469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/029515 Pending WO2025250376A1 (fr) 2024-05-31 2025-05-15 Lumière structurée pour enregistrement 3d sans contact dans une navigation chirurgicale à base de vidéo

Country Status (1)

Country Link
WO (1) WO2025250376A1 (fr)

Similar Documents

Publication Publication Date Title
US20230355312A1 (en) Method and system for computer guided surgery
EP3273854B1 (fr) Systèmes pour chirurgie assistée par ordinateur au moyen d'une vidéo intra-opératoire acquise par une caméra à mouvement libre
US12475586B2 (en) Systems and methods for generating three-dimensional measurements using endoscopic video data
CN104519822B (zh) 软组织切割器械及使用方法
US20130211232A1 (en) Arthroscopic Surgical Planning and Execution with 3D Imaging
US20230190136A1 (en) Systems and methods for computer-assisted shape measurements in video
WO2011086431A1 (fr) Enregistrement et navigation basés sur l'intégration d'images pour chirurgie endoscopique
CN115607286B (zh) 基于双目标定的膝关节置换手术导航方法、系统及设备
WO2024211015A1 (fr) Procédés et systèmes d'enregistrement d'un modèle osseux tridimensionnel
US20250032189A1 (en) Methods and systems for generating 3d models of existing bone tunnels for surgical planning
US20240398481A1 (en) Bone reamer video based navigation
WO2025250376A1 (fr) Lumière structurée pour enregistrement 3d sans contact dans une navigation chirurgicale à base de vidéo
US20250322514A1 (en) Automatic surgical marker motion detection using scene representations for view synthesis
US20250204991A1 (en) Smart, video-based joint distractor positioning system
WO2025240248A1 (fr) Suivi de fraise pour procédures de navigation chirurgicale
US20250169890A1 (en) Systems and methods for point and tool activation
US20250031942A1 (en) Methods and systems for intraoperatively selecting and displaying cross-sectional images
AU2022401872B2 (en) Bone reamer video based navigation
US20240197410A1 (en) Systems and methods for guiding drilled hole placement in endoscopic procedures
US20250049448A1 (en) Tunnel drilling aimer and iso-angle user interface
US20250143799A1 (en) Methods and systems for calibrating surgical instruments for surgical navigation guidance
WO2024220671A2 (fr) Sonde de détection de tissu pour enregistrements plus rapides et plus précis
WO2025019559A2 (fr) Procédés et systèmes d'enregistrement de systèmes de coordonnées internes et externes pour guidage chirurgical
WO2025071739A1 (fr) Système et procédé d'utilisation d'apprentissage automatique pour fournir un guidage de navigation et des recommandations associées à une chirurgie de révision
Stindel et al. Bone morphing: 3D reconstruction without pre- or intra-operative imaging