WO2025250376A1 - Structured light for touchless 3d registration in video-based surgical navigation - Google Patents
- Publication number: WO2025250376A1 (PCT/US2025/029515)
- Authority: WIPO (PCT)
- Prior art keywords: laser projection, laser, image, camera, patient anatomy
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Abstract
A system for performing touchless registration of patient anatomy includes memory storing instructions and one or more processing devices configured to execute the instructions. Executing the instructions causes the system to obtain, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, detect the laser projection in the image obtained from the camera, obtain a three-dimensional set of points corresponding to the detected laser projection, and register the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
Description
STRUCTURED LIGHT FOR TOUCHLESS 3D REGISTRATION IN VIDEO-BASED SURGICAL NAVIGATION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional App. 63/654,169 filed May 31, 2024, the entire contents of which are incorporated herein by reference.
FIELD
[0002] The present disclosure relates to surgical navigation systems and methods, and more particularly to touchless registration techniques for surgical navigation systems.
BACKGROUND
[0003] The background description provided here is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
[0004] Arthroscopic surgical procedures are minimally invasive surgical procedures in which access to the surgical site within the body is by way of small keyholes or ports through the patient’s skin. The various tissues within the surgical site are visualized by way of an arthroscope placed through a port or portal, and the internal scene is shown on an external display device. The tissue may be repaired or replaced through the same or additional ports. In computer-assisted surgical procedures (e.g., surgical procedures associated with a knee or knee joint, surgical procedures associated with a hip or hip joint, etc.), the location of various objects within the surgical site may be tracked relative to the bone by way of images captured by an arthroscope and a three-dimensional model of the bone.
SUMMARY
[0005] A system for performing touchless registration of patient anatomy includes memory storing instructions and one or more processing devices configured to execute the instructions. Executing the instructions causes the system to obtain, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, detect the laser projection in the image obtained from the camera, obtain a three-dimensional set of points corresponding to the detected laser projection, and register the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
[0006] In other features, detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector. Executing the instructions further causes the system to track visual markers on the laser projector to obtain the position information associated with the laser projector. Determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector. Obtaining the three-dimensional set of points includes segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest. Obtaining the three-dimensional set of points includes, for each point in the set of points within the laser projection, determining three-dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera.
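As an illustration of the plane-based computation summarized above, the following is a minimal numerical sketch (in Python with NumPy) of intersecting the camera ray through an undistorted laser pixel with the laser light plane expressed in the camera coordinate system. The intrinsic matrix, plane parameters, and pixel coordinates below are illustrative placeholders, not values from this disclosure, and the disclosure does not prescribe any particular implementation.

```python
import numpy as np

def laser_pixel_to_3d(pixel_uv, K, plane_normal, plane_d):
    """Intersect the camera ray through an (undistorted) laser pixel with the
    laser light plane, both expressed in the camera coordinate system.

    The plane satisfies plane_normal . X + plane_d = 0 for 3D points X."""
    u, v = pixel_uv
    # Back-project the pixel into a ray direction in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    denom = plane_normal @ ray
    if abs(denom) < 1e-9:
        raise ValueError("Ray is (nearly) parallel to the laser plane")
    # Scale the ray so that the point lies on the plane: n . (s * ray) + d = 0.
    s = -plane_d / denom
    return s * ray  # 3D point in camera coordinates

# Illustrative values only (not taken from the disclosure).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.7071, -0.7071])  # laser plane normal in the camera frame
d = 35.0                              # plane offset (same units as the scene)
print(laser_pixel_to_3d((350.0, 260.0), K, n, d))
```

Repeating this intersection for every segmented laser pixel yields the three-dimensional set of points referred to above.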
[0007] In other features, the laser projection is a single line. The laser projection corresponds to a line formed by an intersection of a light plane projected by the laser projector and a surface of the patient anatomy. Detecting the laser projection includes removing distortion from the image to obtain an undistorted image. Detecting the laser projection includes obtaining a color-channel representation of the image based on a color of the laser projection, obtaining a greyscale representation of the image, and obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation. Detecting the laser projection includes obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection. Obtaining the three-dimensional set of
points includes performing optical triangulation on the detected laser projection. The camera is an arthroscopic camera.
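As a non-limiting illustration of the detection steps summarized above (undistortion, color-channel and greyscale representations, difference-based segmentation, and line-segment fitting), the following Python/OpenCV sketch shows one possible arrangement. A green laser, a previously calibrated camera, and the specific threshold value are assumptions made only for this example.

```python
import cv2
import numpy as np

def segment_laser_line(frame_bgr, camera_matrix, dist_coeffs, threshold=40):
    """Detect a projected laser line in a single frame.

    Follows the steps described above: undistort, build a color-channel image
    matching the laser color, build a greyscale image, segment from their
    difference, then fit a line segment to the segmented pixels."""
    # Remove lens distortion using a previously obtained calibration.
    undistorted = cv2.undistort(frame_bgr, camera_matrix, dist_coeffs)

    # Color-channel representation (green channel here, assuming a green laser)
    # and greyscale representation of the same image.
    green = undistorted[:, :, 1].astype(np.int16)
    grey = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY).astype(np.int16)

    # Pixels where the laser color dominates stand out in the difference image.
    diff = np.clip(green - grey, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    ys, xs = np.nonzero(mask)
    if len(xs) < 2:
        return None, mask  # no laser projection detected in this frame

    # Fit a 2D line segment to the segmented laser pixels.
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return (vx, vy, x0, y0), mask
```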
[0008] A method for performing touchless registration of patient anatomy includes, using one or more processors to execute instructions stored in memory, obtaining, from a camera, an image of the patient anatomy, the image including a laser projection that is projected onto the patient anatomy using a laser projector, detecting the laser projection in the image obtained from the camera, obtaining a three-dimensional set of points corresponding to the detected laser projection, and registering the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
[0009] In other features, detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector, and determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector. The method further includes tracking visual markers on the laser projector to obtain the position information associated with the laser projector. Obtaining the three-dimensional set of points includes at least one of: segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest; for each point in the set of points within the laser projection, determining three-dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera; and performing optical triangulation on the detected laser projection.
[0010] In other features, the laser projection is a single line formed by an intersection of (i) a light plane projected by the laser projector and (ii) a surface of the patient anatomy. In other features, detecting the laser projection includes removing distortion from the image to obtain an undistorted image. In other features, detecting the laser projection includes obtaining a color-channel representation of the image based on a color of the laser projection, obtaining a greyscale representation of the image, obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation, and obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection. The camera is an arthroscopic camera.
[0011] Further areas of applicability of the present disclosure will become apparent from the detailed description, the claims and the drawings. The detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] For a detailed description of example embodiments, reference will now be made to the accompanying drawings in which:
[0013] FIG. 1 shows a surgical system in accordance with at least some embodiments;
[0014] FIG. 2 shows a conceptual drawing of a surgical site with various objects within the surgical site tracked, in accordance with at least some embodiments;
[0015] FIG. 3 shows a method in accordance with at least some embodiments;
[0016] FIG. 4 is an example video display showing portions of a femur and a bone fiducial during a registration procedure, in accordance with at least some embodiments;
[0017] FIG. 5 shows a method in accordance with at least some embodiments;
[0018] FIGS. 6A and 6B show an example laser projector 60 and arthroscope configured in accordance with at least some embodiments;
[0019] FIGS. 7A, 7B, 7C, and 7D show an example laser contour detection process in accordance with at least some embodiments;
[0020] FIGS. 8A, 8B, and 8C show an example calibration process in accordance with at least some embodiments;
[0021] FIG. 9 shows an example 3D reconstruction process based on optical triangulation techniques in accordance with at least some embodiments;
[0022] FIGS. 10A, 10B, 10C, and 10D show example images that include synthetic laser projections in accordance with at least some embodiments;
[0023] FIG. 11 shows an example method for performing touchless registration techniques in accordance with at least some embodiments; and
[0024] FIG. 12 shows an example computer system or computing device configured to implement the various systems and methods of the present disclosure.
[0025] In the drawings, reference numbers may be reused to identify similar and/or identical elements.
DEFINITIONS
[0026] Various terms are used to refer to particular system components. Different companies may refer to a component by different names - this document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to... .” Also, the term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
[0027] Similarly, spatial and functional relationships between elements (for example, between device, modules, circuit elements, etc.) are described using various terms, including “connected,” “engaged,” “coupled,” “adjacent,” “next to,” “on top of,” “above,” “below,” and “disposed.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship can be a direct relationship where no other intervening elements are present between the first and second elements, but can also be an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. Nevertheless, this paragraph shall serve as antecedent basis in the claims for referencing any electrical connection as “directly coupled” for electrical connections shown in the drawing with no intervening element(s).
[0028] Terms of degree, such as “substantially” or “approximately,” are understood by those skilled in the art to refer to reasonable ranges around and including the given value and ranges outside the given value, for example, general tolerances associated with manufacturing, assembly, and use of the embodiments. The term “substantially,” when referring to a structure or characteristic, includes the characteristic that is mostly or entirely present in the characteristic or structure. As one example, numerical values that are described as “approximate” or “approximately” as used herein may refer to a value within +/- 5% of the stated value.
[0029] “A”, “an”, and “the” as used herein refer to both singular and plural referents unless the context clearly dictates otherwise. By way of example, “a processor” programmed to perform various functions refers to one processor programmed to perform each and every function, or more than one processor collectively programmed to perform each of the various functions. To be clear, an initial reference to “a [referent]”, and then a later reference for antecedent basis purposes to “the [referent]”, shall not obviate the fact that the recited referent may be plural.
[0030] In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”
[0031] The terms “input” and “output” when used as nouns refer to connections (e.g., electrical, software) and/or signals, and shall not be read as verbs requiring action. For example, a timer circuit may define a clock output. The example timer circuit may create or drive a clock signal on the clock output. In systems implemented directly in hardware (e.g., on a semiconductor substrate), these “inputs” and “outputs” define electrical connections and/or signals transmitted or received by those connections. In systems implemented in software, these “inputs” and “outputs” define parameters read by or written by, respectively, the instructions implementing the function. In examples where used in the context of user input, “input” may refer to actions of a user, interactions with input devices or interfaces by the user, etc.
[0032] “Controller,” “module,” or “circuitry” shall mean, alone or in combination, individual circuit components, an application specific integrated circuit (ASIC), a microcontroller with controlling software, a reduced-instruction-set computer (RISC) with controlling software, a digital signal processor (DSP), a processor with controlling software, a programmable logic device (PLD), a field programmable gate array (FPGA), or a programmable system-on-a-chip (PSOC), configured to read inputs and drive outputs responsive to the inputs.
[0033] As used to describe various surgical instruments or devices, such as a probe, the term “proximal” refers to a point or direction nearest a handle of the probe (e.g., a direction opposite the probe tip). Conversely, the term “distal” refers to a point or direction nearest the probe tip (e.g., a direction opposite the handle).
[0034] For the purposes of this disclosure, a non-transitory computer readable medium (or computer-readable storage medium/media) stores computer data, which data can include computer program code (or computer-executable instructions) that is executable by a computer, in machine-readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, optical storage, cloud storage, magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
[0035] For the purposes of this disclosure, the term “server” should be understood to refer to a service point that provides processing, database, and communication facilities. By way of example, and not limitation, the term “server” can refer to a single, physical processor with associated communications and data storage and database facilities, or it can refer to a networked or clustered complex of processors and associated network
and storage devices, as well as operating software and one or more database systems and application software that support the services provided by the server. Cloud servers are examples.
[0036] For the purposes of this disclosure, a “network” should be understood to refer to a network that may couple devices so that communications may be exchanged, such as between a server and a client device or other types of devices, including between wireless devices coupled via a wireless network, for example. A network may also include mass storage, such as network attached storage (NAS), a storage area network (SAN), a content delivery network (CDN) or other forms of computer or machine- readable media, for example. A network may include the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), wire-line type connections, wireless type connections, cellular or any combination thereof. Likewise, sub-networks, which may employ differing architectures or may be compliant or compatible with differing protocols, may interoperate within a larger network.
[0037] For purposes of this disclosure, a “wireless network” should be understood to couple client devices with a network. A wireless network may employ stand-alone ad- hoc networks, mesh networks, Wireless LAN (WLAN) networks, cellular networks, or the like. A wireless network may further employ a plurality of network access technologies, including Wi-Fi, Long Term Evolution (LTE), WLAN, Wireless Router (WR) mesh, or 2nd, 3rd, 4th or 5th generation (2G, 3G, 4G or 5G) cellular technology, mobile edge computing (MEC), Bluetooth, 802.11 b/g/n, or the like. Network access technologies may enable wide area coverage for devices, such as client devices with varying degrees of mobility, for example. In short, a wireless network may include virtually any type of wireless communication mechanism by which signals may be communicated between devices, such as a client device or a computing device, between or within a network, or the like.
[0038] A computing device may be capable of sending or receiving signals, such as via a wired or wireless network, or may be capable of processing or storing signals, such as in memory as physical memory states, and may, therefore, operate as a server. Thus, devices capable of operating as a server may include, as examples, dedicated rack-mounted servers, desktop computers, laptop computers, set top boxes, integrated
devices combining various features, such as two or more features of the foregoing devices, or the like.
[0039] For purposes of this disclosure, a client (or consumer or user) device, referred to as user equipment (UE), may include a computing device capable of sending or receiving signals, such as via a wired or a wireless network. A client device may, for example, include a desktop computer or a portable device, such as a cellular telephone, a smart phone, a display pager, a radio frequency (RF) device, an infrared (IR) device, a Near Field Communication (NFC) device, a Personal Digital Assistant (PDA), a handheld computer, a tablet computer, a phablet, a laptop computer, a set top box, a wearable computer, a smart watch, an integrated or distributed device combining various features, such as features of the foregoing devices, or the like.
[0040] In some embodiments, as discussed below, the client device can also be, or can communicatively be coupled to, any type of known or to be known medical device (e.g., any type of Class I, II or III medical device), such as, but not limited to, an MRI machine, a CT scanner, an electrocardiogram (ECG or EKG) device, a photoplethysmograph (PPG), a Doppler and transit-time flow meter, a laser Doppler device, an endoscopic device, a neuromodulation device, a neurostimulation device, and the like, or some combination thereof.
DETAILED DESCRIPTION
[0041] The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of non-limiting illustration, certain example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be taken in a limiting sense.
[0042] Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part.
[0043] The present disclosure is described below with reference to block diagrams and operational illustrations of methods and devices. It is understood that each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations, can be implemented by means of analog or digital hardware and computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer to alter its function as detailed herein, a special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks. In some alternate implementations, the functions/acts noted in the blocks can occur out of the order noted in the operational illustrations. For example, two blocks shown in succession can in fact be executed substantially concurrently or the blocks can sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0044] Computer-Aided Surgery (CAS) and surgical navigation systems support surgeons in planning and performing complex surgical procedures with increased precision and accuracy. As one example surgical procedure, arthroscopy is a minimally invasive medical procedure for diagnosing and treating joint problems. An orthopedic surgeon makes a small incision in the skin of the patient and inserts a lens into the incision. The lens is attached to a camera (e.g., an endoscopic camera) and coupled to a light source, allowing the joint to be visualized and treated. Surgical navigation and CAS systems have had significant impact in minimally invasive surgeries (MIS) such as arthroscopic procedures because the increased difficulty in visualizing the anatomy of the patient further complicates the surgical workflow.
[0045] Video-based surgical navigation (VBSN) according to the principles of the present disclosure uses visual fiducials or markers (also called visual markers) attached to patient anatomy to guide the surgeon throughout the medical procedure. The video-based navigation process requires the precise registration of a pre-operative anatomical model with data acquired intra-operatively. The registration process requires the surgeon to digitize the surface of interest that corresponds to the pre-operative model. The visual markers attached to the anatomies define reference frames to which the pre-operative model and the intra-operatively acquired data are aligned.
[0046] In VBSN, fiducial markers with known visual patterns may be attached both to the targeted anatomy and to the instruments and subsequently tracked such that their relative poses can be accurately estimated (e.g., by applying 3D computer vision methods on the images/video acquired by a camera). These relative poses allow the instruments to be located with respect to the anatomy at every frame time instant. For example, VBSN facilitates the tracking of instruments with respect to the targeted anatomy to which a fiducial is rigidly attached (which may be referred to as a “base marker”).
[0047] In some examples, the registration process may be touch-based. Touch-based registration includes touching various points or “painting” patient anatomy using a touch probe to digitize anatomical surfaces. Such touch-based registration processes are time-consuming and complex. Accordingly, surgical navigation systems and methods of the present disclosure are directed to touchless registration techniques. For example, rather than relying on the digitizing of articular surfaces using a touch probe for generating intra-operative 3D point data, touchless registration techniques of the present disclosure include the use of a simple and affordable laser projector. For example, a laser projector is used to project a contour onto the anatomical surface, which is subsequently detected in a video feed provided by an arthroscopic camera using image processing techniques. By tracking visual markers attached to the laser projector at each frame-time instant, the 3D position of the identified laser contour in the reference frame of the camera can be determined. By simultaneously tracking the visual marker that is attached to patient anatomy, a reconstructed contour can be represented in the reference frame of the patient and
subsequently registered with the pre-operative model. Although described with respect to an arthroscopic camera, the principles of the present disclosure may be implemented using other types of cameras used for surgical procedures, such as other types of fiber optic cameras or other cameras separate from (i.e., not integrated with) the laser projector (endoscopic cameras, laparoscopic cameras, robotic surgery cameras, etc.).
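As a minimal sketch of the frame-to-frame bookkeeping described above, the following Python snippet re-expresses reconstructed laser-contour points from the camera reference frame in the reference frame of the visual marker attached to the patient anatomy. The 4x4 homogeneous-transform representation of the tracked marker pose is an assumption made for this example; the disclosure does not mandate any particular pose parameterization.

```python
import numpy as np

def contour_to_patient_frame(points_cam, T_cam_from_anatomy):
    """Re-express reconstructed laser-contour points (camera frame, Nx3) in the
    reference frame of the visual marker attached to the patient anatomy.

    T_cam_from_anatomy is the tracked 4x4 pose of the anatomy marker in the
    camera frame at the same frame-time instant (an assumed representation)."""
    T_anatomy_from_cam = np.linalg.inv(T_cam_from_anatomy)
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_anatomy_from_cam @ homogeneous.T).T[:, :3]
```

Points accumulated this way over many frames form the intra-operative point set that is subsequently registered with the pre-operative model.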
[0048] FIG. 1 shows an example surgical system (e.g., a system including or implementing an arthroscopic video-based navigation system) 100 in accordance with at least some embodiments of the present disclosure. In particular, the example surgical system 100 comprises a tower or device cart 102 and various tools or instruments, such as an example mechanical resection instrument 104, an example plasma-based ablation instrument (hereafter just ablation instrument 106), and an endoscope in the example form of an arthroscope 108 and attached camera head or camera 110. In the example systems, the arthroscope 108 may be a rigid device, unlike endoscopes for other procedures, such as upper-endoscopies. The device cart 102 may comprise a display device 114, a resection controller 116, and a camera control unit (CCU) together with an endoscopic light source and video (e.g., a VBN) controller 118. In example cases the combined CCU and video controller 118 not only provides light to the arthroscope 108 and displays images received from the camera 110, but also implements various additional aspects, such as registering a three- dimensional bone model with the bone visible in the video images, and providing computer-assisted navigation during the surgery. Thus, the combined CCU and video controller are hereafter referred to as surgical controller 118. In other cases, however, the CCU and video controller may be a separate and distinct system from the controller that handles registration and computer-assisted navigation, yet the separate devices would nevertheless be operationally coupled.
[0049] The example device cart 102 further includes a pump controller 122 (e.g., single or dual peristaltic pump). Fluidic connections of the mechanical resection instrument 104 and ablation instrument 106 to the pump controller 122 are not shown so as not to unduly complicate the figure. Similarly, fluidic connections between the pump controller 122 and the patient are not shown so as not to unduly complicate the figure. In the example system, both the mechanical resection instrument 104 and the
ablation instrument 106 are coupled to the resection controller 116, which is a dual-function controller. In other cases, however, there may be a mechanical resection controller separate and distinct from an ablation controller. The example devices and controllers associated with the device cart 102 are merely examples, and other examples include vacuum pumps, patient-positioning systems, robotic arms holding various instruments, ultrasonic cutting devices and related controllers, patient-positioning controllers, and robotic surgical systems.
[0050] FIGS. 1 and 2 further show additional instruments that may be present during an arthroscopic surgical procedure. In particular, an example probe 124 (e.g., shown as a touch probe, but which may be a touchless probe in other examples), a drill guide or aimer 126, and a bone fiducial 128 are shown. The probe 124 may be used during the surgical procedure to provide information to the surgical controller 118, such as information to register a three-dimensional bone model to an underlying bone visible in images captured by the arthroscope 108 and camera head 110. In some surgical procedures, the aimer 126 may be used as a guide for placement and drilling with a drill wire to create an initial or pilot tunnel through the bone. The bone fiducial 128 may be affixed or rigidly attached to the bone and serve as an anchor location for the surgical controller 118 to know the position and orientation of the bone (e.g., after registration of a three-dimensional bone model). Additional tools and instruments may be present, such as the drill wire, various reamers for creating the throughbore and counterbore aspects of a tunnel through the bone, and various tools, such as for suturing and anchoring a graft. These additional tools and instruments are not shown so as not to further complicate the figure.
[0051] Example workflow for a surgical procedure is described below. While described with respect to an example anterior cruciate ligament repair procedure, the below techniques may also be performed for other types of surgical procedures, such as hip procedures or other procedures that include joint distraction. A surgical procedure may begin with a planning phase. An example procedure may start with imaging (e.g., X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI)) of the anatomy of the patient, including the relevant anatomy (e.g., for a knee procedure the lower portion of the femur, the upper portion of the tibia, and the articular cartilage; for a hip procedure, an upper portion of the femur, the
acetabulum/hip joint, pelvis, etc.). The imaging may be preoperative imaging, hours or days before the intraoperative repair, or the imaging may take place within the surgical setting just prior to the intraoperative repair. The discussion that follows assumes MRI imaging, but again many different types of imaging may be used. The image slices from the MRI imaging can be segmented such that a volumetric model or three-dimensional model of the anatomy is created. Any suitable currently available, or later-developed, segmentation technology may be used to create the three-dimensional model. More specifically to the example of anterior cruciate ligament repair, a three-dimensional bone model of the lower portion of the femur, including the femoral condyles, is created. Conversely, for a hip procedure, a three-dimensional model of the upper portion of the femur and at least a portion of the pelvis (e.g., the acetabulum) is created.
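By way of a hedged illustration only, the following Python sketch shows one common way a binary segmentation of the image slices could be turned into a triangulated surface model, here using the marching cubes implementation from scikit-image. The function choice and parameters are assumptions for the example and are not prescribed by this disclosure.

```python
import numpy as np
from skimage import measure

def surface_model_from_segmentation(segmentation, voxel_spacing):
    """Build a triangulated surface model from a binary segmentation volume.

    `segmentation` is a 3D array (1 inside the bone, 0 outside) obtained by
    segmenting the image slices; `voxel_spacing` is the (slice, row, column)
    spacing so the model is expressed in physical units."""
    verts, faces, normals, _ = measure.marching_cubes(
        segmentation.astype(np.float32), level=0.5, spacing=voxel_spacing)
    return verts, faces, normals
```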
[0052] Using the three-dimensional bone model, an operative plan is created. For a knee procedure, the results of the planning may include: a three-dimensional bone model of the distal end of the femur; a three-dimensional bone model for a proximal end of the tibia; an entry location and exit location through the femur and thus a planned-tunnel path for the femur; and an entry location and exit location through the tibia and thus a planned-tunnel path through the tibia. Other surgical parameters may also be selected during the planning, such as tunnel throughbore diameters, tunnel counterbore diameters and depth, desired post-repair flexion, and the like, but those additional surgical parameters are omitted so as not to unduly complicate the specification.
[0053] Conversely, for a hip procedure, the results of the planning may include a three-dimensional bone model of the proximal end of the femur; a three-dimensional bone model for at least a portion of the pelvis/hip joint (e.g., a region of the pelvis corresponding to the acetabulum); a surgical area of interest within the hip joint; and parameters associated with achieving an amount of distraction in the surgical area of interest to provide sufficient access to the surgical area of interest. For example, example hip procedures may include, but are not limited to, labral repair, femoroacetabular impingement (FAI) debridement (e.g., removal of bone spurs/growths), cartilage repair, and synovectomy (e.g., removal of inflamed tissue). These example procedures typically require access to a specific surgical area of
interest within the hip joint (i.e., in a specific area within an interface between the pelvis and the femoral head, such as an area around/surrounding a bone spur or growth, cartilage or tissue to be repaired or removed, etc.).
[0054] The intraoperative aspects include steps and procedures for setting up the surgical system to perform the various repairs. It is noted, however, that some of the intraoperative aspects (e.g., optical system calibration) may take place before any portals or incisions are made through the patient’s skin, and in fact before the patient is wheeled into the surgical room. Nevertheless, such steps and procedures may be considered intraoperative as they take place in the surgical setting and with the surgical equipment and instruments used to perform the actual repair.
[0055] An example procedure can be conducted arthroscopically and is computer- assisted in the sense that the surgical controller 118 is used for arthroscopic navigation within the surgical site. More particularly, in example systems the surgical controller 118 provides computer-assisted navigation during the procedure by tracking locations of various objects within the surgical site, such as the location of the bone within the three-dimensional coordinate space of the view of the arthroscope, and location of the various instruments within the three-dimensional coordinate space of the view of the arthroscope. A brief description of such tracking techniques is described below.
[0056] FIG. 2 shows a conceptual drawing of a surgical site with various objects (e.g., surgical instruments/tools) within the surgical site. In particular, visible in FIG. 2 is a distal end of the arthroscope 108, a portion of a bone 200 (e.g., femur), the bone fiducial 128 within the surgical site, and the probe 124.
[0057] The arthroscope 108 illuminates the surgical site with visible light. In the example of FIG. 2, the illumination is illustrated by arrows 208. The illumination provided to the surgical site is reflected by various objects and tissues within the surgical site, and the reflected light that returns to the distal end enters the arthroscope 108, propagates along an optical channel within the arthroscope 108, and is eventually incident upon a capture array within the camera 110 (FIG. 1 ). The images detected by the capture array within the camera 110 are sent electronically to the surgical controller 118 (FIG. 1 ) and displayed on the display device 114 (FIG. 1 ). In one example, the arthroscope 108 is monocular or has a single optical path through the arthroscope for capturing images of the surgical site, notwithstanding that the single
optical path may be constructed of two or more optical members (e.g., glass rods, optical fibers). That is to say, in example systems and methods the computer-assisted navigation provided by the arthroscope 108, the camera 110, and the surgical controller 118 is provided with an arthroscope 108 that is not a stereoscopic endoscope having two distinct optical paths separated by an interocular distance at the distal end of the endoscope.
[0058] During a surgical procedure, a surgeon selects an arthroscope with a viewing direction beneficial for the planned surgical procedure. Viewing direction refers to a line residing at the center of an angle subtended by the outside edges or peripheral edges of the view of an endoscope. The viewing direction for some arthroscopes is aligned with the longitudinal central axis of the arthroscope, and such arthroscopes are referred to as “zero degree” arthroscopes (e.g., the angle between the viewing direction and the longitudinal central axis of the arthroscope is zero degrees). The viewing direction of other arthroscopes forms a non-zero angle with the longitudinal central axis of the arthroscope. For example, for a 30° arthroscope the viewing direction forms a 30° angle to the longitudinal central axis of the arthroscope, the angle measured as an obtuse angle beyond the distal end of the arthroscope. In the example of FIG. 2, the view angle 210 of the arthroscope 108 forms a non-zero angle to the longitudinal central axis 212 of the arthroscope 108.
[0059] Still referring to FIG. 2, within the view of the arthroscope 108 is a portion of the bone 200 (in this example, within the intercondylar notch), along with the example bone fiducial 128, and the example probe 124. The example bone fiducial 128 is a multifaceted element, with each face or facet having a fiducial disposed or created thereon. However, the bone fiducial need not have multiple faces, and in fact may take any shape so long as that shape can be tracked within the video images. The bone fiducial, such as bone fiducial 128, may be attached to the bone 200 in any suitable form (e.g., via the screw portion of the bone fiducial 128 visible in FIG. 1). The patterns of the fiducials on each facet are designed to provide information regarding the position and orientation of the bone fiducial 128 in the three-dimensional coordinate space of the view of the arthroscope 108. More particularly, the pattern is selected such that the position and orientation of the bone fiducial 128 may be determined from images captured by the arthroscope 108 and attached camera (FIG. 1).
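As one non-limiting illustration of recovering the position and orientation of a bone fiducial from its imaged pattern, the following Python/OpenCV sketch solves a perspective-n-point problem from known pattern geometry and detected image locations. Detection of the pattern corners is assumed to happen elsewhere, and the specific API calls are an implementation assumption rather than the disclosure's method.

```python
import cv2
import numpy as np

def fiducial_pose_from_corners(object_pts, image_pts, camera_matrix, dist_coeffs):
    """Estimate the pose of a bone fiducial in the camera coordinate space.

    `object_pts` are the known 3D corner locations of the fiducial pattern in
    the fiducial's own frame; `image_pts` are the matching 2D detections in a
    video frame (how the pattern is detected is not shown here)."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, dtype=np.float64),
        np.asarray(image_pts, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("Pose estimation failed for this frame")
    R, _ = cv2.Rodrigues(rvec)          # rotation: fiducial frame -> camera frame
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                            # 4x4 pose of the fiducial in the camera frame
```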
[0060] The probe 124 is also shown as partially visible within the view of the arthroscope 108. The probe 124 may be used, as discussed more below, to identify a plurality of surface features on the bone 200 as part of the registration of the bone 200 to the three-dimensional bone model. In some cases the probe 124 and/or the aimer 126 may carry their own, unique fiducials, such that their respective poses may be calculated from the one or more fiducial present in the video stream. However, in other cases, and as shown, the medical instrument used to help with registration of the three-dimensional bone model, be it the probe 124, the aimer 126, or any other suitable medical device, may omit carrying fiducials. Stated otherwise, in such examples the medical instrument has no fiducial markers. In such cases, the pose of the medical instrument may be determined by a machine learning model, discussed in more detail below.
[0061] The images captured by the arthroscope 108 and attached camera are subject to optical distortion in many forms. For example, the visual field between the distal end of the arthroscope 108 and the bone 200 within the surgical site is filled with fluid, such as bodily fluids and saline used to distend the joint. Many arthroscopes have one or more lenses at the distal end that widen the field of view, and the wider field of view causes a “fish eye” effect in the captured images. Further, the optical elements within the arthroscope (e.g., rod lenses) may have optical aberrations inherent to the manufacturing and/or assembly process. Further still, the camera may have various optical elements for focusing the images received onto the capture array, and the various optical elements may have aberrations inherent to the manufacturing and/or assembly process. In example systems, prior to use within each surgical procedure, the endoscopic optical system is calibrated to account for the various optical distortions. The calibration creates a characterization function that characterizes the optical distortion, and frames of the video stream may be compensated using the characterization function prior to further analysis.
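A minimal sketch of such a calibration step is shown below, assuming a standard pinhole camera model with radial/tangential distortion and several views of a known calibration pattern via OpenCV. The strongly wide-angle ("fish eye") optics of some arthroscopes might instead call for a dedicated fisheye model; nothing here is prescribed by the disclosure.

```python
import cv2

def characterize_optical_distortion(object_points, image_points, image_size):
    """Calibrate the endoscopic optical system from several views of a known
    calibration pattern, producing a "characterization function" in the form of
    a camera matrix plus distortion coefficients."""
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return camera_matrix, dist_coeffs, rms

def compensate(frame, camera_matrix, dist_coeffs):
    """Apply the characterization to a video frame before further analysis."""
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```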
[0062] The next example step in the intraoperative procedure is the registration of the bone model created during the planning stage. During the intraoperative repair, the three-dimensional bone model is obtained by or provided to the surgical controller 118. Again using the example of anterior cruciate ligament repair, and specifically computer-assisted navigation for tunnel paths through the femur, the three-
dimensional bone model of the lower portion of the femur is obtained by or provided to the surgical controller 118. Thus, the surgical controller 118 receives the three- dimensional bone model, and assuming the arthroscope 108 is inserted into the knee by way of a port or portal through the patient’s skin, the surgical controller 118 also receives video images of a portion of the lower end of the femur. In order to relate the three-dimensional bone model to the images received by way of the arthroscope 108 and camera 110, the surgical controller 118 registers the three-dimensional bone model to the images of the femur received by way of the arthroscope 108 and camera 110.
[0063] In order to perform the registration, and in accordance with example methods, the bone fiducial 128 is attached to the femur. The bone fiducial placement is such that the bone fiducial is within the field of view of the arthroscope 108. In examples for knee procedures, the bone fiducial 128 is placed within the intercondylar notch superior to the expected location of the tunnel through the lateral condyle. Conversely, in examples for hip procedures, the bone fiducial 128 is placed on the femoral head. To relate or register the bone visible in the video images to the three-dimensional bone model, the surgical controller 118 (FIG. 1) is provided with or determines a plurality of surface features of an outer surface of the bone. Identifying the surface features may take several forms, including a touch-based registration using the probe 124 without a carried fiducial, a touchless registration technique in which the surface features are identified after resolving the motion of the arthroscope 108 and camera relative to the bone fiducial 128, and a third technique that uses a patient-specific instrument.
[0064] In the example touch-based registration, the surgeon may touch a plurality of locations using the probe 124 (FIG. 1). In some cases, particularly when portions of the outer surface of the bone are exposed to view, receiving the plurality of surface features of the outer surface of the bone may involve the surgeon “painting” the outer surface of the bone. “Painting” is a term of art that does not involve application of color or pigment, but instead implies motion of the probe 124 when the distal end of the probe 124 is touching bone. In this example, the probe 124 does not carry or have a fiducial visible to the arthroscope 108 and the camera 110. It follows that the pose of the probe 124 and the location of the distal tip of the probe 124 need to be determined in order to gather the surface features for purposes of registering the three-dimensional bone model.
[0065] FIG. 3 shows a method 300 in accordance with at least some embodiments of the present disclosure. The example method 300 may be implemented in software within a computer system, such as the surgical controller 118. In particular, the example method 300 comprises obtaining a three-dimensional bone model (block 302). That is to say, in the example method 300, what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., CT, MRI) taken preoperatively or intraoperatively. With the bone segmented from or within the images, the three-dimensional bone model may be created. The three-dimensional bone model may take any suitable form, such as a computer-aided design (CAD) model, a point cloud of data points with respect to an arbitrary origin, or a parametric representation of a surface expressed using analytical mathematical equations. Thus, the three-dimensional bone model is defined with respect to the origin and in any suitable orthogonal basis.
[0066] The next step in the example method 300 is capturing video images of the bone fiducial attached to the bone (block 304). The capturing is performed intraoperatively. In an example, the capturing of video images is by way of the arthroscope 108 and camera 110. Other endoscopes may be used, such as endoscopes in which the capture array resides at the distal end of the device (e.g., chip-on-the-tip devices). However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device. The video images may be provided to the surgical controller 118 in any suitable form.
[0067] The next step in the example method 300 is determining locations of a distal tip of the medical instrument visible within the video images (block 306), where the distal tip is touching the bone in at least some of the frames of the video images, and the medical instrument does not have a fiducial. Determining the locations of the distal tip of the medical instrument may take any suitable form. In one example, determining the locations may include segmenting the medical instrument in the frames of the video images (block 308). The segmenting may take any suitable form, such as
applying the video images to a segmentation machine learning algorithm. The segmentation machine learning algorithm may take any suitable form, such as a neural network or a convolutional neural network trained with a training data set showing the medical instrument in a plurality of known orientations. The segmentation machine learning algorithm may produce segmented video images where the medical instrument is identified or highlighted in some way (e.g., box, brightness increased, other objects removed).
[0068] With the segmented video images, the example method 300 may estimate a plurality of poses of the medical instrument within a respective plurality of frames of the video images (block 310). Estimating the poses may take any suitable form, such as applying the video images to a pose machine learning algorithm. The pose machine learning algorithm may take any suitable form, such as a neural network or a convolutional neural network trained to perform six-dimensional pose estimation. The resultant of the pose machine learning algorithm may be, for at least some of the frames of the video images, an estimated pose of the medical instrument in the reference frame of the video images and/or in the reference frame provided by the bone fiducial. That is, the resultant of the pose machine learning algorithm may be a plurality of poses, one pose each for at least some of the frames of the segmented video images. While in many cases a pose may be determined for each frame, in other cases it may not be possible to make a pose estimation for at least some frames because of video quality issues, such as motion blur caused by electronic shutter operation.
[0069] The next step in the example method 300 is determining the locations based on the plurality of poses (block 312). In particular, for each frame for which a pose can be estimated, the location of the distal tip can be determined, based on a model of the medical device, in the reference frame of the video images and/or the bone fiducial. Thus, the resultant is a set of locations, at least some of which represent locations of the outer surface of the bone.
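The per-frame computation of block 312 can be illustrated with the following hedged Python sketch, which assumes each estimated pose is available as a 4x4 homogeneous transform and that the tip offset is known from a model of the medical device; both representations are assumptions made for the example rather than requirements of the disclosure.

```python
import numpy as np

def tip_locations_from_poses(poses, tip_offset):
    """Convert per-frame instrument poses into distal-tip locations.

    `poses` is an iterable of 4x4 transforms (instrument frame -> video/bone-
    fiducial frame), one per frame for which a pose could be estimated;
    `tip_offset` is the position of the distal tip in the instrument's own
    frame, taken from a model of the medical device."""
    tip_h = np.append(np.asarray(tip_offset, dtype=float), 1.0)
    return np.array([(T @ tip_h)[:3] for T in poses])
```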
[0070] FIG. 3 shows an example three-step process for determining the locations of the distal tip of the medical instrument. However, the method 300 is merely an example, and many variations are possible. For example, a single machine learning model, such as a convolutional neural network, may be set up and trained to perform all three steps as a single overall process, though there may be many hidden layers of the convolutional neural network. That is, the convolutional neural network may segment the medical instrument, perform the six-dimensional pose estimation, and determine the location of the distal tip in each frame. The training data set in such a situation would include a data set in which each frame has the medical device segmented, the six-dimensional pose identified, and the location of the distal tip identified. The output of the determining step 306 may be a segmented video stream distinct from the video images captured at step 304. In such cases, the later method steps may use both the segmented video stream and the video images to perform the further tasks. In other cases, the location information may be combined with the video images, such as being embedded in the video images, or added as metadata to each frame of the video images.
[0072] FIG. 4 is an example video display showing portions of a femur and a bone fiducial during a registration procedure. Although described with respect to a distal end of a femur, the principles and techniques described and shown in FIG. 4 can be applied to other anatomical structures/procedures, such as a femoral head for hip procedures as described herein. The display may be shown, for example, on the display device 114 associated with the device cart 102, or any other suitable location. In particular, visible in the main part of the display of FIG. 4 is an intercondylar notch 400, a portion of the lateral condyle 402, a portion of the medial condyle 404, and the example bone fiducial 128. Shown in the upper right corner of the example display is a depiction of the bone, which may be a rendering 406 of the bone created from the three-dimensional bone model. Shown on the rendering 406 is a recommended area 408, the recommended area 408 being portions of the surface of the bone to be “painted” as part of the registration process. Shown in the lower right corner of the example display is a depiction of the bone, which again may be a rendering 412 of the bone created from the three-dimensional bone model. Shown on the rendering 412 are a plurality of surface features 416 on the bone model that have been identified as part of the registration process. Further shown in the lower right corner of the example display is a progress indicator 418, showing the progress of providing and receiving locations on the bone. The example progress indicator 418 is a horizontal bar having
a length that is proportional to the number of locations received, but any suitable graphic or numerical display showing progress may be used (e.g., 0% to 100%).
[0072] Referring to both the main display and the lower right rendering, as the surgeon touches the outer surface of the bone within the images captured by the arthroscope 108 and camera 110, the surgical controller 118 receives the surface features on the bone, and may display each location both within the main display as dots or locations 416, and within the rendering shown in the lower right corner. More specifically, the example surgical controller 118 overlays indications of identified surface features 416 on the display of the images captured by the arthroscope 108 and camera 110, and in the example case shown, also overlays indications of identified surface features 416 on the rendering 412 of the bone model. Moreover, as the number of identified locations 416 increases, the surgical controller 118 also updates the progress indicator 418.
[0073] Still referring to FIG. 4, in spite of the diligence of the surgeon, not all locations identified by the surgical controller 118 based on the surgeon’s movement of the probe 124 result in valid locations on the surface of the bone. In the example of FIG. 4, as the surgeon moves the probe 124 from the inside surface of the lateral condyle 402 to the inside surface of the medial condyle 404, the surgical controller 118, based on the example six-dimensional pose estimation, receives several locations 420 that likely represent locations at which the distal end of the probe 124 was not in contact with the bone.
[0074] With reference to FIG. 3, the plurality of surface features 416 may be, or the example surgical controller 118 may generate, a registration model relative to the bone fiducial 128 (block 314). The registration model may take any suitable form, such as a computer-aided design (CAD) model or point cloud of data points in any suitable orthogonal basis. The registration model, regardless of the form, may have fewer overall data points or less “structure” than the bone model created by the non-invasive computer imaging (e.g., MRI). However, the goal of the registration model is to provide the basis for the coordinate transforms and scaling used to correlate the bone model to the registration model and relative to the bone fiducial 128. Thus, the next step in the example method 300 is registering the bone model relative to the location of the bone fiducial based on the registration model (block 316). Registration may
conceptually involve testing a plurality of coordinate transformations and scaling values to find an alignment that has a sufficiently high correlation or confidence factor. Once such an alignment is found, the bone model is said to be registered to the location of the bone fiducial. Thereafter, the example registration method 300 may end (block 318); however, the surgical controller 118 may then use the registered bone model to provide computer-assisted navigation regarding a procedure involving the bone.
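As a hedged illustration of the alignment step described conceptually above, the following Python sketch computes a closed-form rotation, translation, and scale between corresponding point sets (an Umeyama-style estimate). It assumes correspondences between the registration model and the bone model have already been established (for example, by nearest-neighbor search inside an iterative-closest-point loop), which the disclosure does not specify.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form (Umeyama-style) estimate of the scale, rotation, and
    translation that best map `src` points onto corresponding `dst` points.
    In a full registration this step would sit inside an iterative loop that
    re-establishes correspondences and evaluates the resulting correlation or
    confidence factor."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.ones(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        d[2] = -1.0                              # guard against reflections
    R = U @ np.diag(d) @ Vt
    scale = np.sum(S * d) / src_c.var(axis=0).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t

# A correlation/confidence factor could be scored as the RMS residual:
# residual = np.sqrt(np.mean(np.sum((scale * (R @ src.T).T + t - dst) ** 2, axis=1)))
```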
[0075] In the examples discussed to this point, registration of the bone model involves a touch-based registration technique using the probe 124 without a carried fiducial. However, other registration techniques are possible, such as a touchless registration technique. The example touchless registration technique again relies on placement of the bone fiducial 128. As before, when the viewing direction of the arthroscope 108 is relatively constant, the bone fiducial may have fewer faces with respective fiducials. Once placed, the bone fiducial 128 represents a fixed location on the outer surface of the bone in the view of the arthroscope 108, even as the position of the arthroscope 108 is moved and changed relative to the bone fiducial 128. Again, in order to relate or register the bone visible in the video images to the three-dimensional bone model, the surgical controller 118 (FIG. 1) determines a plurality of surface features of an outer surface of the bone, and in this example determining the plurality of surface features is based on a touchless registration technique in which the surface features are identified based on motion of the arthroscope 108 and camera 110 relative to the bone fiducial 128.
[0076] Another technique for registering the bone model to the bone uses a patient-specific instrument. In both touch-based and touchless registration techniques, a registration model is created, and the registration model is used to register the bone model to the bone visible in the video images. Conceptually, the registration model is used to determine a coordinate transformation and scaling to align the bone model to the actual bone. However, if the orientation of the bone in the video images is known or can be determined, use of the registration model may be omitted, and instead the coordinate transformations and scaling may be calculated directly.
[0077] FIG. 5 shows a method 500 in accordance with at least some embodiments. The example method may be implemented in software within one or more computer
systems, such as, in part, the surgical controller 118. In particular, the example method 500 comprises obtaining a three-dimensional bone model (block 502). In the patient-specific instrument registration technique, what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., MRI) taken preoperatively or intraoperatively.
[0078] The method 500 further includes generating a patient-specific instrument that has a feature designed to couple to the bone represented in the bone model in only one orientation (block 504). Generating the patient-specific instrument may first involve selecting a location at which the patient-specific instrument will attach. For example, a device or computer system may analyze the bone model and select the attachment location. In various examples, the attachment location may be a unique location in the sense that, if a patient-specific instrument is made to couple to the unique location, the patient-specific instrument will not couple to the bone at any other location. In the example case of an anterior cruciate ligament repair, the location selected may be at or near the upper or superior portion on the intercondylar notch. If the bone model shows another location with a unique feature, such as a bone spur or other raised or sunken surface anomaly, such a unique location may be selected as the attachment location for the patient-specific instrument. For example, for hip procedures, the location may be selected based on a location, within the hip joint, of a bone spur or other anatomical feature associated with the hip procedure.
[0079] Moreover, forming the patient-specific instrument may take any suitable form. In one example, a device or computer system may directly print, such as using a 3D printer, the patient-specific instrument. In other cases, the device or computer system may print a model of the attachment location, and the model may then become the mold for creating the patient-specific instrument. For example, the model may be the mold for an injection-molded plastic or casting technique. In some examples, the patient-specific instrument carries one or more fiducials, but as mentioned above, in other cases the patient-specific instrument may itself be tracked and thus carry no fiducials.
[0080] The method 500 further includes coupling the patient-specific instrument to the bone, in some cases the patient-specific instrument having the fiducial coupled to an exterior surface (block 506). As described above, the attachment location for the
patient-specific instrument can be selected to be unique such that the patient-specific instrument couples to the bone in only one location and in only one orientation. In the example case of an arthroscopic procedure, the patient-specific instrument may be inserted arthroscopically. That is, the attachment location may be selected such that a physical size of the patient-specific instrument enables insertion through the ports/portals in the patient’s skin. In other cases, the patient-specific instrument may be made or constructed of a flexible material that enables the patient-specific instrument to deform for insertion in the surgical site, yet return to the predetermined shape for coupling to the attachment location. However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the patient-specific instrument may be a rigid device with fewer size restrictions.
[0081] The method 500 further includes capturing video images of the patient-specific instrument (block 508). Here again, the capturing may be performed intraoperatively. In the example case of an arthroscopic anterior cruciate ligament repair, the capturing of video images is by the surgical controller 118 by way of arthroscope 108 and camera 110. However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device. In such cases, the video images may be provided to the surgical controller 118 in any suitable form.
[0082] The example method 500 further includes registering the bone model based on the location of the patient-specific instrument (block 510). That is, given that the patient-specific instrument couples to the bone at only one location and in only one orientation, the location and orientation of the patient-specific instrument is directly related to the location and orientation of the bone, and thus the coordinate transformations and scaling for the registration may be calculated directly. Thereafter, the example method 500 may end; however, the surgical controller 118 may then use the registered bone model to provide computer-assisted navigation regarding a surgical task or surgical procedure involving the bone.
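Because the patient-specific instrument admits only one attachment pose, the direct calculation reduces to a composition of rigid transforms. The sketch below illustrates this with 4x4 homogeneous matrices; T_cam_instr (the tracked instrument pose in camera coordinates) and T_instr_bone (the instrument-to-bone transform fixed at planning time) are hypothetical names used only for illustration.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def register_bone_from_instrument(T_cam_instr, T_instr_bone):
    """Bone-model pose in camera coordinates, obtained by composing the
    tracked instrument pose with the fixed instrument-to-bone transform."""
    return T_cam_instr @ T_instr_bone
```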
[0083] For example, with the registered bone model the surgical controller 118 may provide guidance regarding a surgical task of a surgical procedure. The specific
guidance is dependent upon the surgical procedure being performed and the stage of the surgical procedure. A non-exhaustive list of guidance comprises: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing a location at which to cut and/or resect the bone; reaming the bone by a certain depth along a certain direction; placing a device (suture, anchor, or other) at a certain location; placing a suture at a certain location; placing an anchor at a certain location; showing regions of the bone to touch and/or avoid; and identifying regions and/or landmarks of the anatomy. In yet still other cases, the guidance may include highlighting within a version of the video images displayed on a display device, which can be the arthroscopic display or a see-through display, or by communicating to a virtual reality device or a robotic tool.
[0084] Surgical navigation systems and methods of the present disclosure use a laser projector and arthroscope to perform touchless registration techniques as described below in more detail. Rather than relying on the digitizing of the articular surface using a touch probe for generating intra-operative 3D point data, systems and methods described herein can be implemented with a simple and affordable laser projector (e.g., a laser scanner). The laser projector is used to project a contour onto the anatomical surface (which may be referred to as “structured light”). The projected contour is subsequently detected in the arthroscopic video using an image processing technique. By tracking the visual markers attached to the laser projector, the 3D position of the identified laser contour in the reference frame of the camera can be determined. The contour is reconstructed and represented in the reference frame of the patient and registered with the pre-operative model.
[0085] Data obtained using the systems and methods described herein can be used to perform touchless registration. “Touchless registration” (registration that does not require contact between a touch probe and an anatomical surface) is more efficient, eliminates the time-consuming process of physical digitization, and minimizes risk of bone or cartilage damage. Further, since there is no need for physical interaction with a specialized contact tool, the area accessible by the surgeon is increased. By providing a larger area for reconstruction, touchless registration facilitates femoral acetabular impingement (FAI) and other procedures where it is difficult to access the target anatomy.
[0086] Unlike manual digitization (i.e., “touch-based” registration) where the surgeon only focuses on target surfaces contained in a pre-operative model, structured light techniques according to the present disclosure include projecting a contour independently of the surface of interest. Consequently, 3D points on surfaces that are not contained in the pre-operative 3D model may be reconstructed, resulting in a percentage of outliers prohibitively large for some registration algorithms to function properly. Accordingly, systems and methods according to the present disclosure include using a deep learning-based model that automatically segments the arthroscopic images and identifies the regions-of-interest for which points will be reconstructed.
[0087] As described below in more detail, the systems and methods described herein provide: a low-cost structured light (SL) system that includes a calibrated laser projector and a standard arthroscope and enables the detection of the projected light contour for inferring 3D point data intra-operatively and accomplishing the registration; a deep learning-based model designed for automatic segmentation of the arthroscopic video that identifies the regions of the arthroscopic images that correspond to structures in the pre-operative anatomical model, enhancing precision and efficiency in the registration process; and a data augmentation technique that improves the performance of the automatic arthroscopic image segmentation model by synthesizing the laser projection in the arthroscopic images.
[0088] For example, the laser projector includes one or more fiducial markers. The laser projector is pre-calibrated using the fiducial markers. Accordingly, a pose of the laser projector can be tracked (e.g., by the surgical navigation system as described herein). The laser projector is used to project a laser onto an anatomical surface shown in an arthroscopic image. In some examples, the image is processed to remove distortion. In other examples, removal of distortion can be omitted, performed in a subsequent step, etc. A subtraction operation is performed on the image (e.g., to subtract a particular color channel representation of the image from a grayscale representation of the image) to obtain a subtraction result and segmentation of the laser projection is performed on the subtraction result. A binary mask and various processing techniques are applied to obtain a contour line of the surface. For example,
3D points of the contour line can be reconstructed using techniques such as optical triangulation.
[0089] FIGS. 6A and 6B show an example laser projector 600 and arthroscope 604 configured in accordance with the principles of the present disclosure. For example, the laser projector 600 and arthroscope 604 are configured for use with a surgical navigation system, such as the system 100. The laser projector 600 includes one or more fiducial markers 608 (e.g., located on a distal tip or end 612 of the laser projector 600). In other words, the fiducial markers 608 may be located as closely as possible to a source/output of a laser contour projected from the laser projector 600, such as an output lens 616. Since a position of the arthroscope 604 is known/can be determined based on various techniques discussed above, a relative pose between the laser projector 600 and the arthroscope 604 is known at every frame-time instant by tracking the fiducial markers 608 using the surgical navigation techniques discussed herein. As one example, the laser projector 600 and the arthroscope 604 can be inserted through different portals providing access to the surgical site and can be moved independently of one another.
[0090] To allow the plane of light to be represented in the reference frame of the structured light system (e.g., the laser projector 600) based on detection/identification of the fiducial markers 608, the laser projector 600 can be calibrated pre-operatively (e.g., the equation of the plane of light in laser coordinates is estimated). Subsequent to calibration, arthroscopic footage (a feed of arthroscopic images) that includes the projected laser is obtained and the laser contour is detected using image processing techniques. To increase system robustness, a deep learning-based model is introduced to segment areas within the arthroscopic video that correspond to surfaces in the pre-operative 3D model (e.g., femur bone and cartilage surfaces). By reconstructing the laser projection within the segmented regions, a 3D point set can be recovered. This process filters outlier 3D points, enhancing the accuracy of the data. Resulting contours or curves represent the anatomical structures in the arthroscopic footage. Registration (e.g., curve-surface registration) can then be performed using the point cloud.
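The outlier-filtering step can be illustrated with a short sketch that assumes a binary region-of-interest mask produced by the segmentation model and a set of detected laser-contour pixels; the function name and array layout are assumptions of the sketch.

```python
import numpy as np

def filter_contour_by_mask(contour_px, mask):
    """Keep only contour pixels that fall inside the segmented region of
    interest (e.g., femur bone/cartilage), discarding likely outliers.
    contour_px: (N, 2) integer array of (u, v) pixel coordinates.
    mask: (H, W) binary array produced by the segmentation model."""
    u, v = contour_px[:, 0], contour_px[:, 1]
    inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
    keep = np.zeros(len(contour_px), dtype=bool)
    keep[inside] = mask[v[inside], u[inside]] > 0
    return contour_px[keep]
```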
[0091] Example surgical navigation techniques using structured light in accordance with the principles of the present disclosure include detection of the laser
projection/line/contour in the arthroscopic image (described in more detail in FIGS. 7A, 7B, 7C, and 7D), calibration of the plane of light of the laser projector (described in more detail in FIGS. 8A, 8B, and 8C), and 3D point reconstruction through triangulation (described in more detail in FIG. 9).
[0092] An example laser contour detection process 700 is described in FIGS. 7A, 7B, 7C, and 7D. As shown at 704 in FIG. 7A, distortion is removed from an original input image 706 (e.g., an image from an image feed obtained by an arthroscope). For example, the input image 706 includes a laser projection/contour 708 as projected upon a surface 710 by the laser projector. In one example, the laser contour 708 corresponds to a green laser projection. An undistorted image (i.e., an image resulting from removing the distortion from the original input image 706) is shown at 712.
[0093] As shown at 716 in FIG. 7B, a green channel (or other color-channel) representation 720 of the undistorted image 712 is subtracted from a grayscale image 722 (a grayscale representation of the undistorted image 712) to obtain a subtraction result 724. The subtraction result 724 is binarized as shown at 728 to obtain a binary image 730, which shows a segmentation 732 of the laser projection in FIG. 7C. Since the laser projection as shown in FIG. 7A exhibits significant dispersion, the segmentation 732 of the laser projection/contour 708 appears as an amorphous blob or shape in FIG. 7C (as opposed to a defined line or other structured light shape). Contour detection is then performed as shown at 736 in FIG. 7D to identify a line 738 or other shape corresponding to the actual projection/contour 708. As one example, a principal component analysis (PCA)-based approach fits a contour to the segmentation 732 by (i) extracting a direction of maximum variance, (ii) sampling the segmentation 732 along the direction of maximum variance, and (iii) for each sample, determining a midpoint of a line segment contained in the segmentation 732 with perpendicular direction. A contour resulting from contour detection is shown as the line 738. Although described above with respect to a green contour and green channel processing/isolation, other colors may be used depending upon the color of the laser projection. Further, other suitable techniques may be used to identify the laser contour, line segment, etc.
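A minimal sketch of this detection pipeline using OpenCV and NumPy is shown below. The undistortion step (e.g., cv2.undistort) is assumed to have been applied already, and the threshold value, slab width, and the sign convention used when differencing the green-channel and grayscale representations are illustrative assumptions rather than the exact processing of FIGS. 7A, 7B, 7C, and 7D.

```python
import cv2
import numpy as np

def detect_laser_contour(img_bgr, threshold=30.0, n_samples=100):
    """Segment a green laser projection and fit a contour through it.
    Returns an (M, 2) array of (u, v) contour points."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    green = img_bgr[:, :, 1].astype(np.float32)
    diff = green - gray                          # subtraction result: laser pixels stand out
    binary = diff > threshold                    # binarization of the subtraction result

    ys, xs = np.nonzero(binary)
    if len(xs) < 2:
        return np.empty((0, 2))
    pts = np.column_stack([xs, ys]).astype(np.float64)

    # PCA: the first principal axis is the direction of maximum variance.
    mean = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - mean, full_matrices=False)
    axis, normal = Vt[0], Vt[1]

    # Sample along the principal axis; at each sample, take the midpoint of the
    # blob's extent in the perpendicular direction.
    proj = (pts - mean) @ axis
    contour = []
    for s in np.linspace(proj.min(), proj.max(), n_samples):
        band = pts[np.abs(proj - s) < 1.0]       # thin slab around the sample
        if len(band):
            perp = (band - mean) @ normal
            mid = mean + s * axis + 0.5 * (perp.min() + perp.max()) * normal
            contour.append(mid)
    return np.asarray(contour)
```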
[0094] Although described as being performed prior to the steps shown in FIGS. 7B, 7C, and 7D, removal of distortion can optionally be omitted, performed in a different
sequence, etc. For example, the distortion can be removed subsequent to contour detection.
[0095] Although described with respect to projecting a plane of light that intersects patient anatomy to define a curve or contour as shown at 738, in some examples the laser projector can be configured to emit light having other types of patterns or geometries (e.g., sets of lines or contours, sets of points, a single point, etc.). In these examples, the steps described in FIGS. 7A, 7B, and 7C can be performed as described above, and contour detection as described in FIG. 7D can be configured to detect the relevant type of pattern (e.g., using suitable computer vision or machine learning techniques).
[0096] Prior to using the laser projector and contour detection techniques (e.g., pre-operatively), a calibration process 800 may be performed to calibrate the laser projector (estimate an equation of the plane of light in coordinates of the laser projector) using one or more calibration images 802, 804, and 806 as shown in FIGS. 8A, 8B, and 8C, respectively. For example, the calibration images 802, 804, and 806 may be captured/obtained using a camera (e.g., an arthroscope) or other imaging device during the calibration process 800. In one example, a planar target (e.g., a planar surface) 808 with one or more fiducial markers 810 (e.g., printed visual markers) is provided. In other examples, the planar target may include a predetermined pattern, such as a checkerboard pattern. A laser projector 812 is used to project a laser (e.g., laser line projection 814) onto the planar target 808. As shown, the laser projector 812 also includes one or more fiducial markers 816. In some examples, the planar target 808 is a bottom surface of a box or other container that can be filled with liquid (e.g., water) to simulate the distortion of a surgical site.
[0097] Accordingly, the calibration images 802, 804, and 806 show the planar target 808, the laser line projection 814, and the fiducial markers 810 and/or 816 for different poses of the laser projector 812. Both the camera (not shown) and the laser projector 812 can be moved to enable the acquisition of a set of calibration images (a calibration set, calibration data set, etc.) with multiple, distinct poses. For each calibration image, the laser line projection 814 is detected as described above in FIGS. 7A, 7B, 7C, and 7D and 3D points are reconstructed in camera coordinates (e.g., by intersecting back-projection rays with the planar target 808 to obtain a line of intersection between the
planar target 808 and a plane of the laser projector 812). The reconstructed 3D points are then transformed to laser coordinates using the tracked pose of the laser projector 812 to obtain a 3D line corresponding to the laser line projection 814. After all calibration images are processed, a set of 3D lines is obtained. In an example, the set of 3D lines is provided to a plane fitting algorithm (e.g., a random sample consensus (RANSAC)-based plane fitting algorithm).
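The plane-fitting step can be sketched as follows, assuming the reconstructed calibration lines have been pooled into a single array of 3D points expressed in laser coordinates; a plain least-squares fit is shown, with the RANSAC variant noted in a comment.

```python
import numpy as np

def fit_light_plane(points_laser):
    """Fit a plane n . x + d = 0 to 3D points expressed in laser coordinates.
    points_laser: (N, 3) array pooling the reconstructed calibration lines.
    Returns (n, d), with n a unit normal vector."""
    centroid = points_laser.mean(axis=0)
    _, _, Vt = np.linalg.svd(points_laser - centroid, full_matrices=False)
    n = Vt[-1]                    # direction of least variance = plane normal
    d = -float(n @ centroid)
    return n, d

# A RANSAC-based variant, as mentioned above, would repeatedly fit candidate
# planes to random three-point subsets, count inliers within a distance
# tolerance, and refit on the largest inlier set.
```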
[0098] FIG. 9 shows an example 3D point reconstruction process 900 based on optical triangulation techniques. In various examples, structured light systems project a light pattern onto a scene and use a camera to detect the light pattern. The intersection of a light plane transformed to the camera coordinate system with the camera ray yields 3D coordinates of respective points. By performing this process for all points within a detected laser contour, a 3D contour in camera coordinates can be obtained. The points can then be transformed to a reference frame of a fiducial marker 902 attached to patient anatomy. By repeating this process for a set of frames, reconstruction of a denser point cloud representing the anatomical structures can be achieved.
[0099] As shown in FIG. 9, a laser projector (not shown) emits/projects a light plane L onto a scene (e.g., onto patient anatomy within a surgical site, such as a surface of a femur 904). The light plane L intersects the scene in a 3D curve 908 (e.g., corresponding to a curve/contour of the surface of the femur 904). The curve is imaged/captured (e.g., by an arthroscopic camera) as indicated by the 2D contour 912 on an image plane 916 of the captured image. For each of a plurality of points (e.g., each pixel) of the 2D contour 912, a back-projection ray (e.g., as indicated by a line 920) that passes through both the pixel and an optical center C of the camera is created. A 3D point X corresponding to a point on the curve 908 can then be reconstructed by intersecting the ray 920 with the light plane L.
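A minimal sketch of this ray/plane intersection is shown below, assuming a pinhole intrinsic matrix K for the undistorted camera image and the light plane already transformed into camera coordinates; rays nearly parallel to the plane are not handled in this sketch.

```python
import numpy as np

def triangulate_contour(pixels, K, plane_n, plane_d):
    """Intersect back-projection rays with the light plane n . X + d = 0
    (both in camera coordinates) to recover 3D points on the contour.
    pixels: (N, 2) array of (u, v) contour pixels in the undistorted image.
    K: 3x3 camera intrinsic matrix."""
    K_inv = np.linalg.inv(K)
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = (K_inv @ uv1.T).T          # ray directions through the optical center C = 0
    # A point on a ray is X = lam * ray; substituting into the plane equation
    # n . X + d = 0 gives lam = -d / (n . ray).
    lam = -plane_d / (rays @ plane_n)
    return rays * lam[:, None]        # (N, 3) points in camera coordinates
```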
[0100] In the manner described above, a pre-operative femur bone and cartilage model can be registered with intra-operative data. However, due to the presence of other anatomical structures (e.g., the proximal tibia, the anterior and posterior cruciate ligaments, the meniscus, etc.), capturing arthroscopic footage containing solely the structures of interest may be difficult or impossible. These additional structures are not contained in the pre-operative model, and, therefore, it is desirable to remove these
additional structures from the reconstructed point set obtained with the structured light system described herein.
[0101] Accordingly, in some examples, systems and methods according to the present disclosure may implement automatic segmentation modeling techniques for arthroscopic videos/video data. As one example, automatic segmentation may be performed using a convolutional neural network architecture, such as a U-Net architecture. Due to the lack of real arthroscopic data with laser projection, an augmentation technique to enhance the performance of automatic arthroscopic image segmentation may be implemented by synthesizing laser projections in a plurality of images in a training dataset.
[0102] For example, for each image in the training dataset, a new image containing synthetic laser projection is generated. Considering a corresponding ground-truth binary mask, a random pixel within a region-of-interest is initially selected. Using previously obtained registration data and the camera pose relative to the image, a forward projection ray that passes through the pixel is intersected with the 3D model, yielding a 3D point. Then, a random 3D vector is chosen that, together with the 3D point, defines a 3D plane. This 3D plane corresponds to the synthetic laser plane of light and is subsequently intersected with the 3D model, yielding a 3D contour. By back-projecting this contour onto the image, a synthetic laser projection can be generated in the image. Laser light dispersion can be simulated by applying a Gaussian intensity distribution centered at the back-projected 2D contour.
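The dispersion-simulation step can be sketched as follows, assuming the synthetic laser contour has already been back-projected into the image as a polyline of pixel coordinates; the color, the Gaussian width, and the blending scheme are assumptions of the sketch.

```python
import cv2
import numpy as np

def render_synthetic_laser(img_bgr, contour_px, sigma=2.0, color=(0, 255, 0)):
    """Overlay a synthetic laser projection with simulated dispersion.
    contour_px: (N, 2) array of back-projected (u, v) contour points.
    Intensity falls off as a Gaussian of the distance to the contour."""
    h, w = img_bgr.shape[:2]
    mask = np.full((h, w), 255, dtype=np.uint8)
    cv2.polylines(mask, [contour_px.astype(np.int32)], False, 0)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 3)   # distance to the drawn contour
    weight = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))[..., None]

    blended = (1.0 - weight) * img_bgr.astype(np.float32) \
              + weight * np.array(color, dtype=np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```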
[0103] FIGS. 10A, 10B, 10C, and 10D show example images 1000, 1004, 1008, and 1012 with respective synthetic laser projections 1016, 1020, 1024, and 1028 obtained as described above. For example, the images 1000, 1004, 1008, and 1012 correspond to real (i.e., captured) arthroscopic images obtained without any laser projection and modified to include the synthetic laser projections 1016, 1020, 1024, and 1028. The images 1000, 1004, 1008, and 1012 can then be used to train a model, such as a semantic segmentation model, that is subsequently used to perform techniques described herein based on actual laser projections on an anatomical surface.
[0104] FIG. 11 shows an example method 1100 for performing touchless registration techniques using a projected laser contour in accordance with the principles of the
present disclosure. As described, the method 1100 may be performed by one or more processing devices or processors, computing devices, etc., such as the system 100 or another computing device executing instructions stored in memory. One or more steps of the method 1100 may be omitted in various examples, and/or may be performed in a different sequence than shown in FIG. 11. The steps may be performed sequentially or non-sequentially, two or more steps may be performed concurrently, etc. One or more of the steps may be analogous to the touch-based registration described herein, such as described with respect to FIG. 5.
[0105] At 1104, the method 1100 includes obtaining a three-dimensional bone model. For example, the three-dimensional bone model may be created by segmenting a plurality of non-invasive images (e.g., CT, MRI) taken preoperatively or intraoperatively. With the bone segmented from or within the images, the three-dimensional bone model may be created. The three-dimensional bone model may take any suitable form, such as a computer-aided design (CAD) model, a point cloud of data points with respect to an arbitrary origin, or a parametric representation of a surface expressed using analytical mathematical equations.
[0106] At 1108, the method 1100 includes obtaining images (e.g., real-time or near real-time images) of a surgical environment including patient anatomy with (i) one or more visual markers (e.g., a bone fiducial marker, which may be referred to as a base marker) fixed to patient anatomy, (ii) a laser contour line or projection (e.g., as projected onto the patient anatomy using a laser projector as described herein), and (iii) one or more visual markers located on the laser projector. In an example, the laser projection is a single line (e.g., a line formed from the projection of a light plane projected/emitted from the laser projector such that an intersection of the light plane with the anatomical surface forms a line that is curved/contoured in accordance with the contour of the anatomical surface). Obtaining the images may include obtaining images using an endoscopic/arthroscopic camera or other imaging device configured to provide an image feed. Although not described or shown as a separate step in FIG. 11, the laser projector can be calibrated (e.g., pre-operatively) as described above with respect to FIGS. 8A, 8B, and 8C.
[0107] At 1112, the method 1100 includes detecting the laser projection in the obtained images. Detecting the laser projection may include various techniques, such
as the laser contour detection process described above in FIGS. 7A, 7B, 7C, and 7D. For example, detecting the laser projection may include (i) removing distortion from an original input image that includes the laser projection to obtain an undistorted input image, (ii) subtracting a green (or other color corresponding to a color of the laser projection) channel representation of the undistorted image from a grayscale representation of the undistorted image to obtain a subtraction result, (iii) binarizing the subtraction result to obtain a binary image that includes a segmentation of the laser projection, and (iv) performing contour detection on the segmentation to identify a line corresponding to the projected laser.
[0108] At 1116, the method 1100 includes reconstructing 3D points of surfaces (i.e., anatomical surfaces) in the images obtained by the arthroscope using detected laser projections (e.g., lines/contours) projected and captured within the images. For example, the points are reconstructed using optical triangulation techniques such as described in FIG. 9. As one example, subsequent to detecting laser projections and identifying lines (corresponding to respective light planes) for a plurality of images as described above in step 1112, points corresponding to each of the identified lines are determined to reconstruct a point cloud (a set of points) representing the anatomical surface.
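One way the per-frame reconstructions could be accumulated into a single point set is sketched below, assuming each frame provides triangulated points in camera coordinates (e.g., from a routine like the triangulation sketch above) together with a tracked 4x4 pose of the bone fiducial in camera coordinates; the pair layout and names are assumptions.

```python
import numpy as np

def accumulate_point_cloud(frames):
    """Build an anatomy point cloud in the bone-fiducial reference frame.
    frames: iterable of (points_cam, T_cam_fid) pairs, where points_cam is an
    (N, 3) array of triangulated points in camera coordinates and T_cam_fid
    is the 4x4 pose of the bone fiducial in camera coordinates for that frame."""
    cloud = []
    for points_cam, T_cam_fid in frames:
        T_fid_cam = np.linalg.inv(T_cam_fid)            # camera -> fiducial frame
        homog = np.column_stack([points_cam, np.ones(len(points_cam))])
        cloud.append((T_fid_cam @ homog.T).T[:, :3])
    return np.vstack(cloud) if cloud else np.empty((0, 3))
```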
[0109] At 1120, the method 1100 includes performing one or more functions based on the reconstructed point cloud. For example, the method 1100 may include registering the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points. Images obtained from the image feed can then be aligned with the pre-operative model using the reconstructed set of points.
[0110] In some examples, the laser projector may not include the visual/fiducial markers described above. For example, 6D object pose estimation algorithms can instead be used to infer the pose of the laser projector as required for 3D triangulation.
[0111] In some examples, the visual/fiducial marker attached to the patient anatomy (e.g., affixed to bone) can be omitted by inferring the 6D pose of the anatomical model at every frame-time instant. 6D pose estimation of the anatomical model can be achieved by performing single- or multi-view depth estimation of regions corresponding to the target anatomical model, or by reconstructing depth data using the laser projector for a single frame, and registering the depth data inferred intra-operatively with the pre-operative 3D target anatomical model using 3D registration techniques.
[0112] In some examples, the contour projected by the laser projector can be detected using data-driven techniques, such as deep learning techniques.
[0113] In some examples, the laser projector can project patterns other than the light plane described herein, such as two or more light points, light circles, etc.
[0114] In various examples, the laser projector can project light of any wavelength as long as the wavelength can be sensed by an arthroscopic camera.
[0115] In some examples, the laser projector can be attached to other surgical instruments.
[0116] In some examples, the laser projector can be used to measure a distance between any two selected points in the reconstructed line/contour. In these examples, a visual marker attached to the anatomy is not required. Registration with the anatomical model is also not required.
[0117] FIG. 12 shows an example computer system or computing device 1200 configured to implement the various systems and methods of the present disclosure. In one example, the computer system 1200 may correspond to one or more computing devices of the system 100, the surgical controller 118, a device that creates a patient-specific instrument, a tablet device within the surgical room, or any other system that implements any or all of the various methods discussed in this specification. For example, the computer system 1200 may be configured to implement all or portions of the method 1100. The computer system 1200 may be connected (e.g., networked) to other computer systems in a local-area network (LAN), an intranet, and/or an extranet (e.g., device cart 102 network), or at certain times the Internet (e.g., when not in use in a surgical procedure). The computer system 1200 may be a server, a personal computer (PC), a tablet computer or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer system is illustrated, the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.
[0118] The computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1206 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 1208, which communicate with each other via a bus 1210.
[0119] Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1202 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1202 is configured to execute instructions for performing any of the operations and steps discussed herein. Once programmed with specific instructions, the processing device 1202, and thus the entire computer system 1200, becomes a special-purpose device, such as the surgical controller 118.
[0120] The computer system 1200 may further include a network interface device 1212 for communicating with any suitable network (e.g., the device cart 102 network). The computer system 1200 also may include a video display 1214 (e.g., the display device 114), one or more input devices 1216 (e.g., a microphone, a keyboard, and/or a mouse), and one or more speakers 1218. In one illustrative example, the video display 1214 and the input device(s) 1216 may be combined into a single component or device (e.g., an LCD touch screen).
[0121] The data storage device 1208 may include a computer-readable storage medium 1220 on which the instructions 1222 (e.g., implementing any methods and any functions performed by any device and/or component depicted or described herein) embodying any one or more of the methodologies or functions described herein are stored. The instructions 1222 may also reside, completely or at least partially, within the main memory 1204 and/or within the processing device 1202 during execution thereof by the computer system 1200. As such, the main memory 1204 and the
processing device 1202 also constitute computer-readable media. In certain cases, the instructions 1222 may further be transmitted or received over a network via the network interface device 1212.
[0122] While the computer-readable storage medium 1220 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0123] The foregoing description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. The broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure.
Claims
1. A system for performing touchless registration of patient anatomy, the system comprising:
memory storing instructions; and
one or more processing devices configured to execute the instructions, wherein executing the instructions causes the system to
obtain, from a camera, an image of the patient anatomy, wherein the image includes a laser projection that is projected onto the patient anatomy using a laser projector,
detect the laser projection in the image obtained from the camera,
obtain a three-dimensional set of points corresponding to the detected laser projection, and
register the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
2. The system of claim 1, wherein detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector.
3. The system of claim 2, wherein executing the instructions further causes the system to track visual markers on the laser projector to obtain the position information associated with the laser projector.
4. The system of claim 2, wherein determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector.
5. The system of claim 1, wherein obtaining the three-dimensional set of points includes segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest.
6. The system of claim 1, wherein obtaining the three-dimensional set of points includes, for each point in the set of points within the laser projection, determining three-dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera.
7. The system of claim 1, wherein the laser projection is a single line.
8. The system of claim 1, wherein the laser projection corresponds to a line formed by an intersection of (i) a light plane projected by the laser projector and (ii) a surface of the patient anatomy.
9. The system of claim 1, wherein detecting the laser projection includes removing distortion from the image to obtain an undistorted image.
10. The system of claim 9, wherein detecting the laser projection includes: obtaining a color-channel representation of the image based on a color of the laser projection; obtaining a greyscale representation of the image; and obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation.
11. The system of claim 10, wherein detecting the laser projection includes obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection.
12. The system of claim 1, wherein obtaining the three-dimensional set of points includes performing optical triangulation on the detected laser projection.
13. The system of claim 1, wherein the camera is an arthroscopic camera.
14. A method for performing touchless registration of patient anatomy, the method comprising, using one or more processors to execute instructions stored in memory:
obtaining, from a camera, an image of the patient anatomy, wherein the image includes a laser projection that is projected onto the patient anatomy using a laser projector;
detecting the laser projection in the image obtained from the camera;
obtaining a three-dimensional set of points corresponding to the detected laser projection; and
registering the patient anatomy by storing data correlating the patient anatomy to the three-dimensional set of points.
15. The method of claim 14, wherein detecting the laser projection includes determining a contour of the laser projection based on position information associated with the laser projector, and wherein determining the contour includes determining a position of the contour relative to the patient anatomy based on the position information associated with the laser projector, the method further comprising tracking visual markers on the laser projector to obtain the position information associated with the laser projector.
16. The method of claim 14, wherein obtaining the three-dimensional set of points includes at least one of: segmenting the image, identifying regions of interest within the segmented image, and reconstructing the laser projection within the regions of interest; for each point in the set of points within the laser projection, determining three- dimensional coordinates of the point based on a plane of the laser projection that is transformed to a coordinate system associated with the camera; and performing optical triangulation on the detected laser projection.
17. The method of claim 14, wherein the laser projection is a single line formed by an intersection of (i) a light plane projected by the laser projector and (ii) a surface of the patient anatomy.
18. The method of claim 14, wherein detecting the laser projection includes removing distortion from the image to obtain an undistorted image.
19. The method of claim 18, wherein detecting the laser projection includes: obtaining a color-channel representation of the image based on a color of the laser projection; obtaining a greyscale representation of the image; obtaining a segmentation of the laser projection based on a difference between the color-channel representation and the greyscale representation; and obtaining a line segment corresponding to the laser projection based on the segmentation of the laser projection.
20. The method of claim 14, wherein the camera is an arthroscopic camera.