
CN116917942A - Registration of computed tomography to fluoroscopy using segmented input - Google Patents

Registration of computed tomography to fluoroscopy using segmented input

Info

Publication number
CN116917942A
CN116917942A (application CN202280016344.4A)
Authority
CN
China
Prior art keywords: anatomical, image, images, elements, segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280016344.4A
Other languages
Chinese (zh)
Inventor
D. Junio
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazor Robotics Ltd
Original Assignee
Mazor Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/590,010 external-priority patent/US12190526B2/en
Application filed by Mazor Robotics Ltd filed Critical Mazor Robotics Ltd
Priority claimed from PCT/IL2022/050197 external-priority patent/WO2022180624A1/en
Publication of CN116917942A publication Critical patent/CN116917942A/en
Pending legal-status Critical Current

Abstract

A method according to an embodiment of the present disclosure includes: receiving a Computed Tomography (CT) image of a patient; segmenting a first set of anatomical elements from the CT image; receiving a plurality of fluoroscopic images of the patient; segmenting a second set of anatomical elements from the plurality of fluoroscopic images; and creating a registration between the CT image and the plurality of fluoroscopic images based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.

Description

Registration of computed tomography to fluoroscopy using segmented input
Technical Field
The present technology relates generally to medical imaging, and more particularly to registration of medical images.
Background
Image guidance may be used for manual, robotic-assisted and/or fully automated surgery. At least some image-guided techniques require registration of a pre-operative image (which may be used to plan a procedure) with one or more intra-operative images (which may be used, for example, to determine the location of a patient within a navigational coordinate space or other relevant reference frame). Associating the pre-operative image with one or more intra-operative images by registration enables the precise anatomical location specified in one image (e.g., the pre-operative image) to be identified in another image (e.g., one of the one or more intra-operative images).
Disclosure of Invention
Exemplary aspects of the present disclosure include:
a method according to at least one embodiment of the present disclosure includes: receiving a Computed Tomography (CT) image of a patient; segmenting a first set of anatomical elements from the CT image; receiving a plurality of fluoroscopic images of the patient; segmenting a second set of anatomical elements from the plurality of fluoroscopic images; and creating a registration between the CT image and the plurality of fluoroscopic images based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
According to any of the aspects herein, wherein the segmenting the second set of anatomical elements further comprises determining that the first anatomical element overlaps the second anatomical element.
According to any of the aspects herein, wherein the first anatomical element is a vertebra and the second anatomical element is a rib.
According to any of the aspects herein, wherein the determining comprises detecting a gradient line within the boundary of the second anatomical element in at least one of the plurality of fluoroscopic images.
According to any of the aspects herein, the method further comprises subtracting a pixel corresponding to the first anatomical element from at least one of the plurality of fluoroscopic images.
According to any of the aspects herein, wherein the subtracting is based on information about an expected shape of at least one of the first anatomical element or the second anatomical element.
According to any of the aspects herein, wherein creating the registration comprises matching at least one first gradient corresponding to at least one anatomical element of the first set of anatomical elements with at least one second gradient corresponding to at least one anatomical element of the second set of anatomical elements.
According to any of the aspects herein, the method further comprises removing one or more gradient lines from at least one of the plurality of fluoroscopic images.
According to any of the aspects herein, wherein the first set of anatomical elements comprises at least one of patella or soft tissue anatomical elements.
A system according to at least one embodiment of the present disclosure includes: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receiving a three-dimensional (3D) image of a patient anatomy; segmenting a first set of anatomical elements from the 3D image; causing an imaging device to capture one or more two-dimensional (2D) images of the patient anatomy; segmenting a second set of anatomical elements from the one or more 2D images; cleaning the one or more 2D images by removing at least one gradient line from each 2D image of the one or more 2D images; and registering the one or more cleaned 2D images to the 3D image based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
According to any of the aspects herein, wherein the segmenting comprises determining that the first anatomical element overlaps the second anatomical element.
According to any of the aspects herein, wherein the segmentation is based on information about an expected shape of at least one of the first anatomical element or the second anatomical element.
According to any of the aspects herein, wherein the at least one gradient line is located in an anatomical element of the second set of anatomical elements.
According to any of the aspects herein, wherein the segmentation of the segmented second set of anatomical elements further comprises defining a boundary around the at least one anatomical tissue.
According to any of the aspects herein, the system further comprises subtracting pixels corresponding to the at least one anatomical tissue from the segmented second set of anatomical features.
According to any of the aspects herein, wherein the boundary defines a region indicative of overlap between the at least one anatomical tissue and the segmented anatomical object of the second set of anatomical elements.
According to any of the aspects herein, wherein the segmenting further comprises identifying one or more gradient lines associated with each anatomical element of the first set of anatomical elements.
According to any of the aspects herein, wherein the 3D image and the one or more 2D images omit use of fiducials.
A system according to at least one embodiment of the present disclosure includes: a processor; an imaging device; and a memory storing instructions thereon that, when executed by the processor, cause the processor to: receiving a three-dimensional (3D) image; causing the imaging device to capture one or more two-dimensional (2D) images; segmenting a first set of anatomical elements from the 3D image; segmenting a second set of anatomical elements from each 2D image of the one or more 2D images, the segmenting comprising defining a boundary of the first anatomical object; removing the first anatomical object from at least one 2D image of the one or more 2D images to produce one or more cleaned 2D images; the one or more cleaned 2D images are registered to the 3D image based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
According to any of the aspects herein, wherein the removing the first anatomical object further comprises subtracting a pixel corresponding to the first anatomical object from the at least one 2D image of the one or more 2D images.
According to any of the aspects herein, wherein the 3D image is a CT scan, an MRI scan or an ultrasound.
According to any of the aspects herein, wherein the one or more 2D images are fluoroscopic images, MRI images or ultrasound images.
Any aspect may be combined with any one or more other aspects.
Any one or more of the features disclosed herein.
Any one or more of the features are generally disclosed herein.
Any one or more of the features generally disclosed herein are combined with any one or more other features generally disclosed herein.
Any one of the aspects/features/embodiments is combined with any one or more other aspects/features/embodiments.
Any one or more of the aspects or features disclosed herein are used.
It should be understood that any feature described herein may be claimed in combination with any other feature as described herein, whether or not the feature is from the same described embodiment.
The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the technology described in this disclosure will be apparent from the description and drawings, and from the claims.
The phrases "at least one," "one or more," "or," and "and/or" are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions "at least one of A, B and C," "at least one of A, B or C," "one or more of A, B and C," "one or more of A, B or C," and "A, B and/or C" means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. When each of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or a class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo, the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2), or a combination of elements selected from two or more classes (e.g., Y1 and Zo).
The term "a" or "an" entity refers to one or more of that entity. Thus, the terms "a" (or "an"), "one or more," and "at least one" may be used interchangeably herein. It should also be noted that the terms "comprising" and "having" may be used interchangeably.
The foregoing is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor an exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure, but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As should be appreciated, other aspects, embodiments, and configurations of the present disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
Many additional features and advantages of the invention will become apparent to those skilled in the art upon consideration of the description of embodiments presented below.
Drawings
The accompanying drawings are incorporated in and form a part of this specification to illustrate several examples of the present disclosure. Together with the description, these drawings serve to explain the principles of the disclosure. The drawings only show preferred and alternative examples of how the disclosure may be made and used, and these examples should not be construed as limiting the disclosure to only the examples shown and described. Additional features and advantages will be made apparent from the following more detailed description of various aspects, embodiments and configurations of the present disclosure, as illustrated by the accompanying drawings referenced below.
FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure;
FIG. 2 is a flow chart according to at least one embodiment of the present disclosure;
FIG. 3 is a flow chart according to at least one embodiment of the present disclosure;
FIG. 4 is a first 2D fluoroscopic image in accordance with at least one embodiment of the present disclosure; and
FIG. 5 is a second 2D fluoroscopic image in accordance with at least one embodiment of the present disclosure.
Detailed Description
It should be understood that the various aspects disclosed herein may be combined in different combinations than specifically presented in the specification and drawings. It should also be appreciated that certain acts or events of any of the processes or methods described herein can be performed in a different order, and/or can be added, combined, or omitted entirely, depending on the example or implementation (e.g., not all of the described acts or events may be required to implement the disclosed techniques in accordance with different implementations of the disclosure). Moreover, although certain aspects of the disclosure are described as being performed by a single module or unit for clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media corresponding to tangible media, such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessor), graphics processing units (e.g., Nvidia GeForce RTX series processors, AMD Radeon RX 5000 series processors, AMD Radeon RX 6000 series processors, or any other graphics processing unit), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. In addition, the present techniques may be fully implemented in one or more circuits or logic elements.
Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. The use or listing of one or more examples (which may be indicated by "for example," "by way of example," "such as," or similar language) is not intended to limit the scope of the disclosure, unless expressly stated otherwise.
For registration, images such as Computed Tomography (CT) images and intraoperative fluoroscopic images may be noisy and may depict or reflect multiple superimposed anatomical elements along a line between an X-ray source and a detector used to capture the image. In other words, when an image of the patient's spine is desired, the fluoroscopic image of the spine may also depict some or all of the patient's chest and/or any other anatomical elements positioned along a line between the X-ray source and the detector, such that one or more aspects of the patient's spine may be blurred or less clear in the image. This is especially the case when taking images of individuals with a high Body Mass Index (BMI) or when taking images of complex anatomical structures. Noise and superimposed views in the images may increase the difficulty of image registration (because it may be difficult to accurately identify matching gradients or other features across multiple images), may add time costs, may frustrate users, and/or may cause surgical postponements or cancellations.
Figs. 4-5 depict example 2D fluoroscopic images, such as may be obtained intraoperatively for a registration process, that depict overlapping anatomy of a patient. In fig. 4, ribs 408 overlap a bone 404, resulting in a gradient 412 within the boundary of the bone 404 that does not correspond to the bone 404. Similarly, each rib 408 in the image overlaps a vertebra 416 in the image, thereby creating a gradient 420 positioned within the outer boundary of the corresponding vertebra 416. When aligning vertebrae in the fluoroscopic image 400 with vertebrae in a preoperative CT scan or other image using, for example, gradient matching techniques, the gradient 412 and/or gradient 420 may increase the time required to complete the registration (by increasing the number of gradients in the image that must be analyzed and considered for possible matching) and may reduce registration accuracy (e.g., if the gradient 412 and/or gradient 420 is incorrectly identified as corresponding to an edge of the vertebra 416). To address these issues and improve the registration process and results, the gradient 412 and/or gradient 420 may be removed using one or more of the methods described herein (and/or one or more aspects thereof) (e.g., method 200 and/or method 300).
Fig. 5 depicts another 2D fluoroscopic image 500 illustrating an overlap between a rib 504 and a vertebra 508 according to at least one embodiment of the present disclosure. The overlap between rib 504 and vertebra 508 in image 500 creates a gradient 512 that falls within the outer boundary of vertebra 508. As described above, the presence of the gradient 512 may negatively impact the registration process, particularly if the registration process uses gradient matching techniques to align the 2D fluoroscopic image with the corresponding vertebrae in the preoperative CT scan or other image. One or more methods disclosed herein (and/or one or more aspects thereof) may be used to remove or reduce the occurrence of gradients 512 in the image 500, thereby improving registration.
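For illustration only, a gradient such as the gradient 512 may be located by restricting a standard edge filter to the interior of a segmented vertebra, as in the following Python sketch (assuming NumPy, SciPy, and scikit-image; the relative threshold and erosion depth are assumed values, not parameters taken from this disclosure):
# Minimal sketch (not from the disclosure): locate strong gradients that fall
# strictly inside a segmented vertebra boundary, i.e., candidate overlap
# artifacts such as the gradient 512.
import numpy as np
from scipy.ndimage import binary_erosion
from skimage import filters

def interior_gradients(fluoro_2d: np.ndarray,
                       vertebra_mask: np.ndarray,
                       rel_threshold: float = 0.1,
                       erosion_iters: int = 3) -> np.ndarray:
    """Boolean mask of strong edges located inside the vertebra's boundary."""
    grad_mag = filters.sobel(fluoro_2d)                   # edge strength per pixel
    strong = grad_mag > rel_threshold * grad_mag.max()    # keep only strong edges
    interior = binary_erosion(vertebra_mask, iterations=erosion_iters)
    return strong & interior                              # exclude the vertebra's own outline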
Embodiments of the present disclosure allow a surgeon to better confirm registration, improve initial guesses for registration, reduce the number of registration iterations needed to complete a surgical procedure or surgical task, and reduce the amount of time required by, and frustration caused by, the registration process.
In accordance with at least some embodiments of the present disclosure, one or more vertebrae are segmented in each of a CT scan image and one or more fluoroscopic images. Portions of the patient anatomy that overlap the vertebrae in the image (e.g., ribs) may also be segmented. Those portions may then be removed from the image (e.g., resulting in a cleaned image), and a registration may be made (using the cleaned image) between the vertebrae in the CT image and the vertebrae in the fluoroscopic image. The surgeon may then approve or reject the resulting segmentation-based registration.
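The following Python sketch summarizes that workflow at a high level; the callables passed in are hypothetical placeholders for the segmentation, cleaning, and registration algorithms described elsewhere herein, not APIs defined by this disclosure:
# High-level sketch of the segment -> clean -> register workflow described above.
from typing import Callable, Sequence

def register_ct_to_fluoro(ct_volume,
                          fluoro_images: Sequence,
                          segment_ct: Callable,
                          segment_fluoro: Callable,
                          remove_overlap: Callable,
                          register_by_gradients: Callable):
    """The callables stand in for the segmentation, cleaning, and registration
    algorithms, which are not specified here."""
    ct_vertebrae = segment_ct(ct_volume)                      # first set of anatomical elements
    cleaned_images, fluoro_vertebrae = [], []
    for image in fluoro_images:
        vertebrae, ribs = segment_fluoro(image)               # second set, incl. overlapping ribs
        cleaned_images.append(remove_overlap(image, vertebrae, ribs))  # drop rib pixels inside vertebrae
        fluoro_vertebrae.append(vertebrae)
    # Align the segmented vertebrae (e.g., by gradient matching) to obtain the registration.
    return register_by_gradients(ct_vertebrae, fluoro_vertebrae, cleaned_images)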
Embodiments of the present disclosure provide technical solutions to one or more of the following problems: cleaning noisy intraoperative surgical images; improving registration accuracy of autonomous, semi-autonomous and/or other image guided surgical procedures or surgical protocols; reducing the time required to complete registration in the operating room (thereby saving operating room resources); reducing the registration failure rate; and/or to enhance the visibility of the registration process by the surgeon.
Turning to fig. 1, a block diagram of a system 100 in accordance with at least one embodiment of the present disclosure is shown. The system 100 may be used to facilitate registration in connection with a surgery or surgical procedure; to clean one or more images (e.g., to remove noise or image artifacts in one or more images); and/or to perform one or more other aspects of one or more of the methods disclosed herein. The system 100 includes a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, a database 130, and/or a cloud or other network 134. Systems according to other embodiments of the present disclosure may include more or fewer components than the system 100. For example, the system 100 may not include the imaging device 112, the robot 114, the navigation system 118, one or more components of the computing device 102, the database 130, and/or the cloud 134.
The computing device 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other embodiments of the present disclosure may include more or fewer components than computing device 102.
The processor 104 of the computing device 102 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions stored in the memory 106 that may cause the processor 104 to perform one or more computing steps with or based on data received from the imaging device 112, the robot 114, the navigation system 118, the database 130, and/or the cloud 134.
Memory 106 may be or include RAM, DRAM, SDRAM, other solid state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. Memory 106 may store information or data that may be used to perform any steps of methods 200 and/or 300, or any other method, such as described herein. The memory 106 may store, for example, one or more image processing algorithms 120, one or more segmentation algorithms 122, one or more detection algorithms 124, and/or one or more registration algorithms 128. In some implementations, such instructions or algorithms may be organized into one or more applications, modules, packages, layers, or engines. The algorithms and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the robot 114, the database 130, and/or the cloud 134.
Computing device 102 may also include a communication interface 108. The communication interface 108 may be used to receive image data or other information from external sources (e.g., the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component that is not part of the system 100) and/or to transmit instructions, images, or other information to external systems or devices (e.g., another computing device 102, the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component that is not part of the system 100). The communication interface 108 may include one or more wired interfaces (e.g., USB ports, Ethernet ports, FireWire ports) and/or one or more wireless transceivers or interfaces (configured to transmit and/or receive information, e.g., via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, etc.). In some implementations, the communication interface 108 may be used to enable the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time required to complete computationally intensive tasks or for any other reason.
The computing device 102 may also include one or more user interfaces 110. The user interface 110 may be or include a keyboard, mouse, trackball, monitor, television, screen, touch screen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 110 may be used, for example, to receive user selections or other user inputs regarding any of the steps of any of the methods described herein. Nonetheless, any desired input for any step of any method described herein may be automatically generated by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100. In some embodiments, the user interface 110 may be used to allow a surgeon or other user to modify instructions to be executed by the processor 104 and/or to modify or adjust settings of other information displayed on or corresponding to the user interface 110 in accordance with one or more embodiments of the present disclosure.
Although the user interface 110 is shown as part of the computing device 102, in some embodiments, the computing device 102 may utilize the user interface 110 housed separately from one or more remaining components of the computing device 102. In some embodiments, the user interface 110 may be located proximate to one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
The imaging device 112 may be used to image anatomical features (e.g., bones, veins, tissue, etc.) and/or other aspects of the patient anatomy to produce image data (e.g., image data depicting or corresponding to bones, veins, tissue, etc.). As used herein, "image data" refers to data generated or captured by the imaging device 112, including data in machine-readable form, graphical/visual form, and in any other form. In different examples, the image data may include data corresponding to anatomical features of the patient or a portion thereof. The image data may be or include pre-operative images, intra-operative images, post-operative images, or images taken independently of any operative procedure. In some implementations, the first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and the second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time that is subsequent to the first time. The imaging device 112 may be capable of capturing 2D images or 3D images to generate image data. The imaging device 112 may be or include, for example, an ultrasound scanner (which may include, for example, physically separate sensors and receivers, or a single ultrasound transceiver); an O-arm, C-arm, G-arm, or any other device that utilizes X-ray based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), any of which may also include physically separate emitters and detectors; a Magnetic Resonance Imaging (MRI) scanner; an Optical Coherence Tomography (OCT) scanner; an endoscope; a microscope; an optical camera; a thermal imaging camera (e.g., an infrared camera); radar systems (which may include, for example, a transmitter, a receiver, a processor, and one or more antennas); or any other imaging device 112 suitable for obtaining an image of anatomical features of a patient. The imaging device 112 may be contained entirely within a single housing, or may include a transmitter/emitter and receiver/detector in separate housings or otherwise physically separated.
In some embodiments, the imaging device 112 may include more than one imaging device 112. For example, the first imaging device may provide first image data and/or a first image, and the second imaging device may provide second image data and/or a second image. In yet other implementations, the same imaging device may be used to provide both the first image data and the second image data and/or any other image data described herein. The imaging device 112 may be used to generate an image data stream. For example, the imaging device 112 may be configured to operate with a shutter that is open, or with a shutter that continuously alternates between open and closed, in order to capture successive images. For the purposes of this disclosure, image data may be considered continuous and/or provided as a stream of image data if the image data represents two or more frames per second, unless otherwise specified.
Robot 114 may be any surgical robot or surgical robotic system. The robot 114 may be or include, for example, a Mazor X™ Stealth Edition robotic guidance system. The robot 114 may be configured to position the imaging device 112 at one or more precise locations and orientations and/or to return the imaging device 112 to the same location and orientation at a later point in time. The robot 114 may additionally or alternatively be configured to manipulate surgical tools (whether based on guidance from the navigation system 118 or not) to complete or assist in surgical tasks. In some embodiments, the robot 114 may be configured to hold and/or manipulate anatomical elements during or in conjunction with a surgical procedure. The robot 114 may include one or more robotic arms 116. In some embodiments, the robotic arm 116 may include a first robotic arm and a second robotic arm, but the robot 114 may include more than two robotic arms. In some embodiments, one or more of the robotic arms 116 may be used to hold and/or manipulate the imaging device 112. In embodiments where the imaging device 112 includes two or more physically separate components (e.g., a transmitter and a receiver), one robotic arm 116 may hold one such component and another robotic arm 116 may hold another such component. Each robotic arm 116 may be positioned independently of the other robotic arms. The robotic arms may be controlled in a single shared coordinate space or in separate coordinate spaces.
The robot 114, along with the robotic arm 116, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 116 may be positioned or positionable in any pose, plane, and/or focus. The pose includes a position and an orientation. As a result, the imaging device 112, surgical tool, or other object held by the robot 114 (or more specifically, by the robotic arm 116) may be precisely positioned at one or more desired and specific locations and orientations.
The robotic arm 116 may include one or more sensors that enable the processor 104 (or the processor of the robot 114) to determine the precise pose of the robotic arm (and any objects or elements held by or secured to the robotic arm) in space.
In some embodiments, the reference markers (i.e., navigation markers) may be placed on the robot 114 (including, for example, on the robotic arm 116), the imaging device 112, or any other object in the surgical space. The reference marks may be tracked by the navigation system 118 and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof. In some embodiments, the navigation system 118 may be used to track other components of the system (e.g., the imaging device 112), and the system may operate without the use of the robot 114 (e.g., the surgeon manually manipulates the imaging device 112 and/or one or more surgical tools, for example, based on information and/or instructions generated by the navigation system 118).
During operation, the navigation system 118 may provide navigation to the surgeon and/or surgical robot. The navigation system 118 may be any known or future-developed navigation system including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 118 may include one or more cameras or other sensors for tracking one or more reference markers, navigation trackers, or other objects within the operating room or other room in which part or all of the system 100 is located. The one or more cameras may be optical cameras, infrared cameras, or other cameras. In some embodiments, the navigation system may include one or more electromagnetic sensors. In various embodiments, the navigation system 118 may be used to track the position and orientation (i.e., pose) of the imaging device 112, the robot 114 and/or the robotic arm 116, and/or one or more surgical tools (or, more specifically, to track the pose of a navigation tracker attached, directly or indirectly, in a fixed relationship to one or more of the foregoing). The navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing device 102, the imaging device 112, or another source) and/or for displaying images and/or video streams from the one or more cameras or other sensors of the navigation system 118. In some embodiments, the system 100 may operate without the use of the navigation system 118. The navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or a component thereof, to the robot 114, or to any other element of the system 100 regarding, for example, the pose of one or more anatomical elements, whether a tool is in an appropriate trajectory, and/or how to move a tool into an appropriate trajectory to carry out a surgical task according to a pre-operative or other surgical plan.
The system 100 or similar system may be used, for example, to perform one or more aspects of any of the methods 200 and 300 described herein. The system 100 or similar system may also be used for other purposes.
Fig. 2 depicts a method 200 that may be used, for example, to register a three-dimensional image (e.g., a CT scan, an MRI scan, or any other 3D image) with a plurality of two-dimensional images (e.g., a plurality of fluoroscopic scans or other X-ray based images, a plurality of ultrasound images, or a plurality of 2D images generated using another imaging modality), which have been cleaned prior to registration by removing unwanted artifacts or non-essential anatomical elements displayed in each of the plurality of two-dimensional images. Cleaning the two-dimensional images may advantageously enable faster and/or more accurate registration, for example, by reducing errors associated with mismatched anatomical elements during registration and by reducing the number of potentially incorrect initial guesses provided to or otherwise used in a registration algorithm. As described in more detail below, the method 200 may be used to remove noise and incidental anatomical elements from a plurality of two-dimensional images using, for example, gradient matching, pixel subtraction, overlap measurement, combinations thereof, and the like. Nonetheless, additional or alternative image filtering or processing techniques may be implemented in connection with the method 200 (or any other method discussed herein) to facilitate cleaning of multiple two-dimensional images.
The method 200 (and/or one or more steps thereof) may be implemented or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 114, or part of a navigation system, such as navigation system 118. Processors other than any of the processors described herein may also be used to perform the method 200. The at least one processor may perform the method 200 by executing instructions stored in a memory, such as the memory 106. These instructions may correspond to one or more steps of the method 200 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, segmentation algorithm 122, detection algorithm 124, and/or registration algorithm 128.
The method 200 includes receiving a three-dimensional (3D) image of a patient (step 204). The 3D image may be received via a user interface (such as the user interface 110) and/or a communication interface (such as the communication interface 108 of a computing device, such as the computing device 102) and may be stored in a memory (such as the memory 106 of the computing device). The 3D image may also be received from an external database or image repository (e.g., a hospital image storage system, such as a picture archiving and communication system (PACS), a Health Information System (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records), and/or via the Internet or another network. In other embodiments, the 3D image may be received or obtained from an imaging device (such as the imaging device 112), which may be any imaging device, such as an MRI scanner, a CT scanner, any other X-ray based imaging device, or an ultrasound imaging device. The 3D image may also be generated by and/or uploaded to any other component of a system, such as the system 100. In some embodiments, the 3D image may be received indirectly via any other component of the system or a node of a network to which the system is connected.
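By way of example only, a 3D image stored as a CT DICOM series might be received (e.g., loaded from a PACS export on disk) using the SimpleITK library, as in the following sketch; the directory path is a placeholder:
# Minimal sketch of receiving a 3D image from storage, assuming a CT DICOM
# series on disk and the SimpleITK library.
import SimpleITK as sitk

def load_ct_series(dicom_dir: str) -> sitk.Image:
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # locate and sort the slices
    reader.SetFileNames(series_files)
    return reader.Execute()                                  # 3D volume with spacing/origin metadata

# ct_image = load_ct_series("/path/to/ct_series")            # placeholder path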
The 3D image may depict a 3D pose (e.g., position and orientation) of the patient anatomy or a portion thereof. In some embodiments, the 3D image may be captured preoperatively (e.g., before the surgery) and may be stored in a system (e.g., the system 100) and/or one or more components thereof (e.g., the database 130). The stored 3D image may then be received as described above (e.g., by the processor 104) preoperatively (e.g., prior to surgery) and/or intraoperatively (e.g., during surgery). In some embodiments, the 3D image may also depict a plurality of anatomical elements associated with the patient anatomy, including incidental anatomical elements (e.g., ribs or other anatomical objects not undergoing the surgery or surgical procedure) in addition to target anatomical elements (e.g., vertebrae or other anatomical objects on which the surgery or surgical procedure is to be performed). The 3D image may include various features corresponding to the anatomy and/or anatomical elements (and/or portions thereof) of the patient, including gradients corresponding to the boundaries and/or contours of the various anatomical elements depicted, different intensity levels corresponding to the different surface textures of the various anatomical elements depicted, combinations thereof, and the like. The 3D image may depict any portion or site of the patient's anatomy and may include, but is in no way limited to, one or more vertebrae, ribs, lungs, soft tissue (e.g., skin, tendons, muscle fibers, etc.), patella, collarbone, scapula, combinations thereof, and the like.
In at least one embodiment, the 3D image may depict one or more views of at least one vertebra (e.g., a lumbar vertebra). In this embodiment, the at least one vertebra may be the subject of a surgery or surgical procedure (e.g., drilling, cutting, tapping, articulating, sawing, or another operation performed with an autonomously or semi-autonomously operated surgical tool) planned by a system (e.g., the system 100), a surgeon, and/or a combination thereof. The 3D image may be a CT scan of the patient taken prior to surgery (e.g., one day prior to surgery, two days prior to surgery, one week prior to surgery, etc.), and may include a depiction of the at least one vertebra and at least one rib (and/or any other anatomical elements in the vicinity of the at least one vertebra). Once received, the CT scan may be used by the system and/or components thereof (e.g., the computing device 102) as described below to perform a registration between the CT scan and an intraoperatively captured scan (e.g., a scan, such as a fluoroscopic scan, captured during the surgical procedure) to enable image-based guidance (e.g., using the navigation system 118) to guide a surgical tool (which may be attached to a robot such as the robot 114 and/or the robotic arm 116) to perform, or to assist a surgeon in performing, the surgical procedure (e.g., drilling into a vertebra).
In some embodiments, the method 200 may alternatively receive a plurality of 2D images in step 204, and may use these 2D images to generate a 3D model of the patient anatomy (e.g., depicting one or more anatomical elements located within the depicted region of the patient anatomy). In these embodiments, the 2D images may be captured preoperatively or intraoperatively (e.g., using an imaging device such as the imaging device 112). The 2D images may be captured in various formats (e.g., fluoroscopy, ultrasound, etc.) and may be used, for example, for registration as described later.
The method 200 further includes segmenting a first set of anatomical elements from the 3D image (step 208). The segmentation of the first set of anatomical elements may be performed preoperatively and/or intraoperatively, and the result of the segmentation is to identify one or more discrete anatomical elements. In other words, segmentation produces a set of one or more anatomical elements with known boundaries within the 3D image. Thus, for example, two adjacent vertebrae may be distinguished as separate vertebrae rather than being considered as a single anatomical element. Additionally or alternatively, portions of the 3D image corresponding to ribs may be so identified, and portions of the image corresponding to vertebrae may be so identified. The first set of anatomical elements is a collection of anatomical elements (e.g., ribs, vertebrae, soft tissue, organs, etc.) depicted in the 3D image that may be grouped or otherwise labeled as a set, for example, by one or more algorithms described herein. The first set of anatomical elements may include only a single type of anatomical element (e.g., vertebrae), or a single class of anatomical elements (e.g., bone anatomy) or various anatomical elements. In some embodiments, the first set of anatomical elements can include both target anatomical elements (for spinal surgery, e.g., vertebrae) and incidental anatomical elements (for spinal surgery, e.g., ribs). Segmentation may include identifying both the target anatomical element and the incidental anatomical element with different and/or unique identifiers (e.g., labels, tags, highlights, etc.). For example, the target anatomical element may be highlighted in a first color (e.g., when displayed on a user interface), while the incidental or extraneous anatomical element may be highlighted in a second color that is different from the first color.
In some embodiments, step 208 may include segmenting the 3D image using a processor (e.g., the processor 104) and a segmentation algorithm (e.g., the segmentation algorithm 122) to obtain the first set of anatomical elements. In some embodiments, the segmentation algorithm may select, crop, identify, or otherwise mark the first set of anatomical elements such that the first set of anatomical elements is visually and/or graphically distinguishable from the rest of the 3D image (e.g., the portion of the 3D image that is not included in the first set of anatomical elements). In some embodiments, the segmentation algorithm may be trained on, for example, simulation and/or historical data (e.g., data from a previous surgery or surgical procedure, data based on previous imaging of the patient) to receive the 3D image and segment the 3D image to obtain or output the first set of anatomical elements.
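By way of illustration only, a very simple stand-in for such a segmentation of bony elements from a CT volume is a Hounsfield-unit threshold followed by connected-component labeling, as sketched below in Python; the threshold is an assumed value, and a trained segmentation algorithm would normally be used instead:
# Illustrative stand-in for segmentation of bony anatomical elements from a CT volume.
import numpy as np
from scipy import ndimage

def segment_bone_elements(ct_hu: np.ndarray, hu_threshold: float = 250.0):
    """Return (labels, count): a volume in which each connected bony region
    (a candidate anatomical element) carries a unique integer label."""
    bone_mask = ct_hu > hu_threshold          # crude bone mask in Hounsfield units
    labels, count = ndimage.label(bone_mask)  # separate into discrete elements
    return labels, count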
The method 200 further includes receiving a plurality of two-dimensional (2D) images of the patient (step 212). The plurality of 2D images may be received from a system (e.g., the system 100) and/or components thereof (e.g., the database 130). In some embodiments, the plurality of 2D images may be captured intraoperatively (e.g., by the imaging device 112, which may or may not be connected to the robotic arm 116). The plurality of 2D images may depict various views of one or more portions of the patient's anatomy (e.g., a plurality of 2D images of the patient from various angles while the patient is in a prone position). Similar to the received 3D image, each 2D image of the plurality of 2D images may depict a plurality of anatomical features of the patient anatomy, including incidental anatomical features (e.g., ribs or other anatomical elements unrelated to the surgical or operative task) and target anatomical features (e.g., vertebrae or other anatomical elements on which the surgical or operative task is to be performed). However, unlike the 3D image, each 2D image of the plurality of 2D images may include one or more "superimposed views" or views of overlapping anatomical elements, wherein the anatomical elements represented in any given superimposed view depend on which anatomical elements lie within the line of sight of the imaging device (or, in other words, which elements are positioned along a line between the emitter and the detector for an X-ray imaging device).
In at least one embodiment, the plurality of 2D images may depict one or more views of at least one vertebra (e.g., a lumbar vertebra) or other target anatomical element. In this embodiment, a surgery or surgical procedure to be performed by a system (e.g., the system 100), a surgeon, and/or a combination thereof may focus on the at least one vertebra. The plan may call for a surgical tool connected to an autonomously or semi-autonomously operated robotic arm to be positioned to drill, cut, tap, articulate, saw, or otherwise manipulate the at least one vertebra. The plurality of 2D images may be or include fluoroscopic images taken intraoperatively (e.g., while the patient is on an operating table) and may depict various views or angles of the at least one vertebra (e.g., captured by moving the imaging device and capturing the fluoroscopic images at different poses relative to the at least one vertebra).
In some embodiments, step 212 includes receiving only a single 2D image. In such embodiments, one or more aspects of step 212 and method 200 may be performed (e.g., by processor 104) with a single 2D image instead of multiple 2D images. For example, the steps of method 200 (such as segmentation and registration discussed below) may be performed on a single 2D image instead of multiple 2D images or with the single 2D image.
The method 200 further includes segmenting a second set of anatomical elements from the plurality of 2D images (step 216). Step 216 of segmenting the second set of anatomical elements from each 2D image of the plurality of 2D images may be similar or identical to step 208 (e.g., using the processor 104 and the segmentation algorithm 122). For example, the second set of anatomical elements can include target anatomical elements (for spinal surgery, e.g., vertebrae) and incidental anatomical elements (for spinal surgery, e.g., ribs).
In some embodiments, the second set of anatomical elements may include anatomical elements that may be seen, located, or otherwise identified in all 2D images of the plurality of 2D images. For example, any anatomical feature or set of anatomical features that may be identified in some but not all of the plurality of 2D images (e.g., soft tissue that may be identified in a 2D image captured when the imaging device is orthogonal to the back of the patient but may not be identified in a 2D image captured when the imaging device is parallel to the back of the patient) may be excluded from the second set of anatomical elements. In these embodiments, a comparison algorithm may determine that an anatomical feature is located in all 2D images of the plurality of 2D images by using simulation and/or historical data regarding the actual or expected appearance of the anatomical feature at each of the various angles and poses used to capture the plurality of 2D images (e.g., data regarding the patient anatomy from a previous surgery, simulation data of the anatomical feature, etc.) and matching that data with the appearance of the anatomical feature in each of the plurality of 2D images.
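A minimal sketch of keeping only the elements identified in every view is shown below; per_view_elements is a hypothetical structure holding, for each 2D image, the set of element identifiers produced by its segmentation:
# Sketch of restricting the second set to anatomical elements identified in every 2D image.
def elements_in_all_views(per_view_elements):
    common = set(per_view_elements[0])
    for elements in per_view_elements[1:]:
        common &= set(elements)               # drop anything missing from any view
    return common

# Example: elements_in_all_views([{"L3", "L4", "rib_10"}, {"L3", "L4"}]) -> {"L3", "L4"}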
In some embodiments, the segmentation may be based at least on an expected shape of an anatomical element. The expected shape may be a predetermined curve, structure, contour, or curvature (and/or a combination thereof) that is uniquely associated with the anatomical element or that is capable of distinguishing the anatomical element from other anatomical elements. For example, the expected shape of a rib may be different from the expected shape of a vertebra. The segmentation algorithm may be trained (using real image data and/or artificial/simulated image data) on the expected shapes of one or more anatomical elements depicted in the 2D image and may segment the 2D image based on such training. In at least one embodiment, the segmentation algorithm may segment at least one rib and at least one vertebra based on the expected shapes of the at least one rib and the at least one vertebra.
Moreover, in some embodiments, segmenting the second set of anatomical elements from the plurality of 2D images in step 216 is based on the 3D image received in step 204 and/or on the segmentation of the first set of anatomical elements in step 208. For example, the segmentation may include orienting the 3D image to reflect the pose of the imaging device used to capture one 2D image of the plurality of 2D images, and then determining which anatomical elements of the 3D image are visible in that pose. Additional information useful for segmenting the second set of anatomical elements from the plurality of 2D images may also be obtained by analyzing the 3D image from different angles (e.g., obtaining information about anatomical element position, orientation, and/or spacing in a dimension not shown in the 2D images). Boundary information corresponding to each such anatomical element in the 3D image may also be used to determine how to correctly segment the anatomical elements visible in the 2D images.
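As a rough illustration of how the segmented 3D image might be used in this way, the sketch below rotates a label volume into an assumed imaging pose and takes a simple parallel projection; a real system would use the actual imaging geometry (e.g., a digitally reconstructed radiograph), and the angles and axis choice here are assumptions:
# Sketch of predicting which segmented elements should appear in a given 2D view.
import numpy as np
from scipy import ndimage

def project_segmented_labels(label_volume: np.ndarray,
                             angles_deg=(0.0, 0.0)) -> np.ndarray:
    rotated = ndimage.rotate(label_volume, angles_deg[0], axes=(0, 1),
                             order=0, reshape=False)   # order=0 keeps integer labels intact
    rotated = ndimage.rotate(rotated, angles_deg[1], axes=(0, 2),
                             order=0, reshape=False)
    # 2D map of labeled elements overlapping each detector pixel (max label along the axis).
    return rotated.max(axis=0)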
The method 200 further includes cleaning the plurality of 2D images (step 220). Cleaning (e.g., removing extraneous or noisy portions of the plurality of 2D images) may occur after segmentation of each 2D image of the plurality of 2D images. In this case, the cleaning may occur only on the portion of each 2D image of the plurality of 2D images that contains the target anatomical feature (e.g., a segment containing at least one vertebra). The cleaning may include one or more steps such as gradient matching, identifying image overlaps, pixel subtraction, combinations thereof, and the like, as described below in connection with method 300.
In at least one embodiment, an image processing algorithm (e.g., the image processing algorithm 120) may identify (based on the segmentation of the 3D image and the plurality of 2D images) a target anatomical element such as a vertebra and an extraneous or incidental anatomical element such as a rib. Additionally, the image processing algorithm may identify one or more portions of the image where a vertebra overlaps a rib (and vice versa). The image processing algorithm may then remove some or all of the image data associated with the rib from the image. In some embodiments, the image processing algorithm may detect or receive information corresponding to one or more gradients in the segmented 2D image (e.g., a change in pixel values that may indicate a boundary between an anatomical element and the background and/or another anatomical element) to identify the overlap. For example, a vertebra may overlap a rib (and vice versa), which may result in one or more gradients appearing within the surface area of the vertebra in the 2D image, which in turn indicates that the vertebra overlaps another anatomical element (in this case, a rib). The image processing algorithm may identify the portions of the segmented vertebra in the 2D image that contain pixels associated with the rib (e.g., one or more regions of the image that contain pixels associated with only the rib, and one or more regions of the image that, due to the overlap between the rib and the vertebra, contain pixels associated with both) and may remove the associated pixels and/or image data (e.g., using pixel subtraction), leaving the image free of pixels representing or corresponding to the rib. In some embodiments, cleaning may additionally or alternatively include subtracting pixels associated with any and/or all incidental anatomical elements based on the occurrence of one or more gradients in the surface region of the target anatomical element.
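A minimal Python sketch of such pixel subtraction is shown below; filling the overlapping pixels with the median of the unaffected vertebra pixels is an assumption rather than a requirement of this disclosure:
# Sketch of the pixel-subtraction cleaning step: suppress pixels attributed to
# an overlapping rib inside a vertebra's boundary.
import numpy as np

def subtract_rib_overlap(fluoro_2d: np.ndarray,
                         vertebra_mask: np.ndarray,
                         rib_mask: np.ndarray) -> np.ndarray:
    cleaned = fluoro_2d.astype(float).copy()
    overlap = vertebra_mask & rib_mask                 # pixels shared by both elements
    keep = vertebra_mask & ~rib_mask                   # vertebra pixels unaffected by the rib
    if overlap.any() and keep.any():
        cleaned[overlap] = np.median(cleaned[keep])    # remove the rib's contribution
    return cleaned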
In some implementations, additionally or alternatively, the image processing algorithm may use the 3D image to verify the overlap. For example, the image processing algorithm may analyze pixel values associated with the 3D image from the same angle as the 2D image was captured, and may use data associated with a second, different angle that provides missing depth information about the 2D image (e.g., relative distance of anatomical elements depicted in the 2D image in a third dimension) to determine actual boundaries and/or gradients that should occur in the 2D image.
The method 200 further includes creating a registration between the 3D image and the plurality of 2D images based on the segmented first set of anatomical elements and the segmented second set of anatomical elements (step 224). Step 224 may use a processor (e.g., the processor 104) that utilizes, for example, a registration algorithm such as the registration algorithm 128. The registration algorithm may convert, map, or create a correlation between the 3D image and/or components thereof and each 2D image of the plurality of 2D images, which may then be used by the system (e.g., the system 100) and/or one or more components thereof (e.g., the navigation system 118) to convert one or more coordinates in the patient coordinate space to one or more coordinates in the robot (e.g., the robot 114) coordinate space and/or vice versa. As previously described, the registration may include a registration between a 3D image (e.g., a CT scan) and one or more 2D images (e.g., fluoroscopic images) and/or vice versa, and/or a registration between one 2D image and another 2D image and/or vice versa.
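Once a registration is available, mapping coordinates between spaces can be illustrated with a simple homogeneous transform, as in the sketch below; the 4x4 matrix is a placeholder for whatever transform the registration algorithm produces:
# Sketch of applying a completed registration to map a point between coordinate
# spaces (e.g., patient space to robot or navigation space).
import numpy as np

def map_point(point_xyz, registration_matrix: np.ndarray) -> np.ndarray:
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)   # homogeneous coordinates
    return (registration_matrix @ p)[:3]

# The inverse mapping (e.g., robot space back to patient space) uses
# np.linalg.inv(registration_matrix).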
The registration utilizes at least one anatomical element of the segmented first set of anatomical elements and at least one anatomical element of the segmented second set of anatomical elements. For example, where the segmentation results in identifying the first and second sets of anatomical elements such that each anatomical element in the first and second sets is known to be a particular, unique anatomical element, such identification may be used to align a given anatomical element depicted in the 3D image with the same anatomical element depicted in one or more 2D images of the plurality of 2D images. As a more specific example, where the segmentation results in identification of the L3 vertebra (including boundaries thereof) in the 3D image and in one or more of the plurality of 2D images, the registration may include aligning the L3 vertebra depicted in the 3D image with the L3 vertebra depicted in the one or more of the plurality of 2D images. Such alignment may be sufficient to allow registration of one or more of the plurality of 2D images to the 3D image, or one or more additional alignments may be performed using one or more additional segmented anatomical elements until one or more of the plurality of 2D images are properly aligned with the 3D image and a registration between the images may be determined, performed, or otherwise accomplished (e.g., mapping the images to each other such that a particular point in one image may be readily identified as a corresponding point in the other image).
In some embodiments, more advanced alignment techniques may be utilized in conjunction with step 224. For example, where the 3D image depicts the patient in a first pose and the plurality of 2D images depict the patient in a second pose, the registration may include modifying the 3D image by adjusting the position of one or more of the segmented anatomical elements such that the pose of the segmented anatomical elements (e.g., the first set of anatomical elements) in the 3D image corresponds to the pose of the segmented anatomical elements (e.g., the second set of anatomical elements) in the 2D images. In this way, the 3D image may not only be registered to one or more 2D images of the plurality of 2D images, but may also be updated to reflect the current (e.g., intraoperative) pose of the patient.
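The following is a non-limiting sketch of how such a per-element pose update might look, assuming each segmented vertebra is represented as a point set and a rigid transform from the preoperative pose to the intraoperative pose has been recovered for it. The dictionary-based interface is an assumption made only for this example.

```python
# Illustrative sketch: apply a per-vertebra rigid transform so the 3D model
# reflects the pose recovered from the intraoperative 2D images.
import numpy as np

def update_pose(vertebra_points, transforms):
    """vertebra_points: {label: (N, 3) array of points in the 3D image}
    transforms:        {label: 4x4 homogeneous preop-to-intraop transform}
    Returns the repositioned points per vertebra."""
    updated = {}
    for label, pts in vertebra_points.items():
        T = transforms[label]
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
        updated[label] = (homog @ T.T)[:, :3]
    return updated
```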
In some embodiments, the registration may be based on one or more gradients. For example, the registration algorithm may align, match, and/or map one or more gradients from the 3D image to a corresponding one or more gradients in each of the 2D images. In some embodiments, the registration algorithm may determine a first set of identifiers or characteristics (e.g., pixel values, changes in one or more pixel values in different directions, an average pixel value change in one or more directions in the image, etc.) associated with one or more gradients depicted in the 3D image and compare these identifiers or characteristics to a second set of identifiers or characteristics calculated based on each of the 2D images. Based on a similarity or pattern (e.g., the same change in pixel values) between the two sets of identifiers or characteristics, the registration algorithm may determine that a gradient present in the 3D image corresponds to a gradient present in one or more of the 2D images. Determining that a corresponding gradient is present in both images may enable a system (e.g., system 100) and/or one or more components thereof (e.g., navigation system 118) to convert or map the location of one or more anatomical features in the 3D image to the location of the corresponding one or more anatomical features in one or more of the 2D images, and vice versa. Once completed, the registration may be used, for example, to facilitate a surgery or surgical task (e.g., controlling a robot and/or robotic arm relative to the patient anatomy, and/or providing image-based guidance to a surgeon).
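For illustration only, a simple gradient-based similarity score of the kind a registration algorithm might maximize is sketched below: gradient magnitudes of a projection derived from the 3D image are compared against gradient magnitudes of a 2D image. The use of Sobel filters and normalized correlation is an assumption for this example, not a statement of the registration algorithm 128 internals.

```python
# Hedged sketch: normalized correlation of gradient magnitudes between a
# projection of the 3D image and a 2D (e.g., fluoroscopic) image.
import numpy as np
from scipy import ndimage

def gradient_similarity(projection_2d, fluoro_2d):
    def grad_mag(img):
        gx = ndimage.sobel(img.astype(float), axis=1)
        gy = ndimage.sobel(img.astype(float), axis=0)
        return np.hypot(gx, gy)

    a = grad_mag(projection_2d).ravel()
    b = grad_mag(fluoro_2d).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))  # higher means better gradient agreement
```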
In some embodiments, the 3D image does not include any fiducials, tracking markers, or other non-anatomical registration aids depicted therein. In these embodiments, the plurality of 2D images likewise do not include any fiducials, tracking markers, or other non-anatomical registration aids depicted therein. Instead, one or more anatomical elements of the segmented first set of anatomical elements and one or more anatomical elements of the segmented second set of anatomical elements are used to align the images for registration.
In still further embodiments, the registration may be performed one or more times intraoperatively (e.g., during a surgical procedure) to update, adjust, and/or refresh the current registration. For example, a new 3D image and/or a new plurality of 2D images may be captured intraoperatively, and a new registration may be completed therefrom (e.g., using the preoperative 3D image and a new plurality of intraoperative 2D images, or a new intraoperative 3D image and a new plurality of intraoperative 2D images, or the like). An updated registration may be required, for example, if the pose of the patient changes or is changed during the surgical procedure.
The present disclosure encompasses embodiments of the method 200 that include more or fewer steps than those described above and/or one or more steps different than those described above.
Fig. 3 depicts a method 300, one or more aspects of which may be used, for example, to clean, filter, or otherwise improve image quality associated with one or more 2D images of a plurality of 2D images. In general, method 300 describes aspects that may be included or utilized in conjunction with step 220 of method 200 discussed above. However, additionally or alternatively, the steps in method 300 discussed below may also be applied to 3D images. In some embodiments, one or more of the steps of method 300 may also be used in addition to or in lieu of step 220.
The method 300 (and/or one or more steps thereof) may be implemented or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 114, or part of a navigation system, such as navigation system 118. Processors other than any of the processors described herein may also be used to perform the method 300. The at least one processor may perform the method 300 by executing instructions stored in a memory, such as the memory 106. These instructions may correspond to one or more steps of the method 300 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, segmentation algorithm 122, detection algorithm 124, and/or registration algorithm 128.
The method 300 includes determining that a first anatomical element overlaps a second anatomical element as depicted in an image (e.g., one 2D image of the plurality of 2D images described above in connection with the method 200) (step 304). In some embodiments, the first anatomical element and the second anatomical element may be elements of a set of anatomical elements (e.g., a set of elements depicted in at least one 2D image of a plurality of 2D images, such as a plurality of fluoroscopic images). The first and second anatomical elements may be segmented (e.g., as described above in connection with step 216) along with the entire set of elements to which they belong. Step 304 may identify the overlap between the first anatomical element and the second anatomical element using an image processing algorithm such as image processing algorithm 120. For example, the image processing algorithm may receive a 2D image (e.g., a single fluoroscopic image) depicting both the first anatomical element and the second anatomical element as a single, substantially continuous (e.g., having substantially similar pixel values) portion of the image, and may determine (e.g., based on pixel values, historical data, simulated anatomy, an available 3D image of the anatomy in question, etc.) that the portion in question includes an overlap between the first anatomical element and the second anatomical element. In some embodiments, the image processing algorithm may receive the segmented 2D image (e.g., a fluoroscopic image segmented using, for example, step 216) and may detect or otherwise locate the overlap in the segmented 2D image. In such embodiments, the segmentation may be the basis for, or may otherwise facilitate, determining that a portion of the image corresponds to the first anatomical element overlapping the second anatomical element, or vice versa.
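As a purely illustrative sketch of one way step 304 could flag an overlap when segmentation has produced one mask per anatomical element, the example below reports every pair of masks that share pixels. The mask representation and the minimum-pixel threshold are assumptions introduced for this example.

```python
# Hedged sketch: an overlap exists wherever two segmentation masks share pixels.
import numpy as np

def find_overlaps(masks, min_pixels=25):
    """masks: {name: boolean 2D array} from the segmented fluoroscopic image.
    Returns (name_a, name_b, overlap_mask) tuples for each overlapping pair."""
    names = list(masks)
    overlaps = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = masks[a] & masks[b]
            if shared.sum() >= min_pixels:  # ignore spurious single-pixel contact
                overlaps.append((a, b, shared))
    return overlaps
```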
In some embodiments, the image processing algorithm may be or include one or more machine learning algorithms trained to identify overlaps based on training data. For example, the training data may be based on images (whether actual, artificial, and/or simulated) of overlapping anatomical features, such as images of overlapping ribs and vertebrae. In at least one embodiment, the image processing algorithm may be configured to receive an image depicting a rib and at least one vertebra and to detect an overlap between the rib and the at least one vertebra. Such detection may be facilitated using an anatomical atlas, information about vertebra and/or rib shapes, and/or other information. In embodiments in which the segmented 2D image received by the image processing algorithm contains multiple anatomical elements, such as a target anatomical element (e.g., a vertebra or other anatomy on which a surgery or surgical procedure is planned or ongoing) and an incidental anatomical element (e.g., a rib or other anatomy on which no surgery or surgical procedure is planned or ongoing), the image processing algorithm may identify the overlap based on the relative pixel values and/or coordinates of each of the anatomical elements. For example, if the segmented 2D image shows a rib overlapping a vertebra (or vice versa), the image processing algorithm may identify that the rib is not isolated from the vertebra (e.g., based on average pixel values of the rib and/or the vertebra), and/or vice versa. The image processing algorithm may, for example, make this identification based on pixel values along the boundaries of the anatomical elements and/or variations thereof (e.g., a typical vertebra may exhibit a first pixel value variation between the vertebra boundary and the surrounding soft tissue, which may differ from the variation where the vertebra boundary overlaps a rib); pixel values or averages of pixel values within a region (e.g., an overlap of two anatomical elements may increase the average pixel value in the region containing the overlap); training data (e.g., a history of pixel values associated with overlaps and/or artificial/simulated data); combinations of the above; and so on. In some embodiments, the image processing algorithm may receive a segmented 2D image in which the second set of anatomical elements has been labeled, highlighted, outlined, or otherwise marked to identify each unique anatomical element and/or one or more characteristics thereof. For example, the segmented 2D image may have unique contours surrounding the second set of anatomical elements and/or the individual anatomical elements thereof (when viewed, for example, on a user interface), and the image processing algorithm may be configured to use these determined contours of the anatomical features and to verify whether the anatomical features overlap where the contours intersect or cross.
The method 300 further includes defining a boundary surrounding the region of overlap between the first anatomical element and the second anatomical element (step 308). The overlap region may be or include one or more portions of an image (e.g., a segmented fluoroscopic image) in which an anatomical element (e.g., a vertebra) depicted in the image overlaps one or more other anatomical elements (e.g., one or more ribs). In some embodiments, step 308 may utilize a segmentation algorithm, such as segmentation algorithm 122, to segment the overlap region. For example, the segmentation algorithm may receive a segmented fluoroscopic image that has been marked (e.g., using metadata) as containing an overlap (e.g., received from an image processing algorithm such as the one discussed above in connection with step 304), and may identify the overlapping portion and estimate one or more boundaries around it.
In at least one embodiment, the received image may be a segmented fluoroscopic image in which the overlap region includes one or more portions of a first anatomical element or tissue (e.g., a vertebra) overlapping a second anatomical element or tissue (e.g., a rib). In this embodiment, the segmentation algorithm may identify the regions where the first anatomical element overlaps the second anatomical element, and/or vice versa, and may create a boundary around them (e.g., a highlight, a change in pixel values, a gradient, etc., such that the boundary is visible when the image is displayed on a user interface). In some embodiments, this boundary may be unique relative to other boundaries rendered on the image (such as the boundaries used to segment the sets of anatomical elements). For example, the unique boundary may be used by the system (e.g., system 100) and/or one or more components thereof (e.g., navigation system 118, registration algorithm 128, etc.) during the registration. In this example, the unique boundary may indicate to the system that any features within the boundary constitute an overlap of anatomical elements (e.g., a rib overlapping a vertebra), so that the use of any features contained within the boundary is omitted during the registration (e.g., the registration algorithm may omit the use of features within the boundary, such as gradient values, when registering to and/or from an image), thereby reducing the probability of registration errors associated with incorrectly mapping the coordinates of the overlapping features. Alternatively, the unique boundary may be used to identify an image region that includes overlapping anatomical elements, so that the region in question may be cleaned according to one or more steps of the method 300 in order to clearly display the correct boundaries (whether actual or estimated) of the target anatomical element therein.
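The sketch below illustrates, without limitation, one way step 308 might be expressed when the overlap is available as a boolean mask: a rectangular boundary is placed around it and an exclusion mask is produced so that a registration step could ignore features inside. The rectangular-box choice and the function names are assumptions for this example only.

```python
# Hedged sketch: bound the overlap region and mark it as unusable for registration.
import numpy as np

def bounding_box(overlap_mask):
    """Smallest (row_min, row_max, col_min, col_max) box enclosing the overlap.
    Assumes the mask contains at least one True pixel."""
    rows, cols = np.nonzero(overlap_mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

def exclusion_mask(image_shape, overlap_mask):
    """True wherever registration may use features, False inside the boundary."""
    r0, r1, c0, c1 = bounding_box(overlap_mask)
    usable = np.ones(image_shape, dtype=bool)
    usable[r0:r1 + 1, c0:c1 + 1] = False
    return usable
```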
The method 300 further includes detecting a gradient line that lies within a boundary of the first anatomical element depicted in the first image (step 312). The first anatomical element may be an anatomical element (e.g., a vertebra) from a set of anatomical elements (e.g., a set of anatomical elements identified from a plurality of 2D images, such as a plurality of fluoroscopic images), and may additionally include a surrounding border (e.g., a highlight, a contour, etc.). In some embodiments, the first anatomical element may overlap one or more other anatomical elements, which may be flagged or otherwise marked in the image (e.g., with a border around the overlap, a highlighted overlap region, etc., when the image is rendered to a user interface). Step 312 may use a detection algorithm, such as detection algorithm 124, which may detect one or more gradient lines lying within the boundary of the first anatomical element. In some embodiments, the one or more gradient lines may be, or may reflect, a change in visual contrast, pixel values, a combination thereof, or the like, and may be used by the detection algorithm to identify the type of anatomical element associated with the one or more gradient lines. The detection algorithm may use characteristics associated with the one or more gradient lines to identify the anatomical element to which the one or more gradient lines belong (e.g., each anatomical element may have unique or characteristic gradient lines, which in turn may correspond to the boundaries of that anatomical element). In some embodiments, the detection algorithm may detect and classify all gradient lines lying within the boundary (e.g., using a classifier such as a Support Vector Machine (SVM), a K-Nearest Neighbors (KNN) algorithm, etc.). In these embodiments, the detection algorithm may mark any gradient lines associated with anatomical elements (e.g., ribs during vertebral surgery) that are unrelated or incidental to the surgery or surgical procedure. Additionally or alternatively, step 316 may identify gradient lines associated with any overlap region between the first anatomical element and any other anatomical element.
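By way of a loose, non-limiting sketch, one possible realization of such gradient detection and classification is shown below: gradient magnitudes are computed inside the segmented element's boundary and strong-gradient pixels are classified with a k-nearest-neighbors model trained offline. The feature choice, the threshold, and the training data are assumptions made only for this illustration.

```python
# Hedged sketch: detect strong gradients inside a region and classify them.
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

def gradient_features(image, region_mask, threshold):
    """Return (coords, features) for strong-gradient pixels inside the region."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    strong = (mag > threshold) & region_mask
    coords = np.argwhere(strong)
    feats = np.column_stack([mag[strong], image[strong]])
    return coords, feats

# Hypothetical training data: features labeled e.g. "vertebra_edge" vs "rib_edge".
# classifier = KNeighborsClassifier(n_neighbors=5).fit(train_feats, train_labels)
# labels = classifier.predict(feats)  # flags gradient pixels from incidental anatomy
```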
The method 300 further includes subtracting pixels corresponding to the second anatomical element from the first image (step 316). The first image is the image depicting the first anatomical element and the second anatomical element described above in connection with step 304. In some embodiments, the first image may be a 2D image from the plurality of 2D images (e.g., fluoroscopic images), the first anatomical element may be a target anatomical element (e.g., a vertebra or other anatomy on which a surgery or surgical task is planned or being performed), and the second anatomical element may be an incidental or extraneous anatomical object or tissue (e.g., a rib) on which no surgery or surgical procedure is planned or being performed. Step 316 may use an image processing algorithm, such as image processing algorithm 120, to subtract the pixels representing the second anatomical element or to remove the image data associated with the second anatomical element. In some embodiments, the image processing algorithm may receive a previously processed image (e.g., an image in which anatomical elements have been determined to overlap, an image in which the gradient values of various anatomical elements do not match expected values or in which multiple gradient values are present, etc.).
In one embodiment, the image processing algorithm may subtract the pixel values associated with the second anatomical element from the first image. For example, if the first anatomical element is a vertebra that overlaps a rib (or vice versa), the image processing algorithm may isolate the pixels associated with the rib (e.g., based on the segmentation and/or the results of any aspect of the method 300 described above) and may subtract or otherwise remove those pixels from the image and/or the image data. In some embodiments, only certain portions of the second anatomical element may be subtracted or removed. For example, the image processing algorithm may remove the non-overlapping portions of the second anatomical element (e.g., the portions of the second anatomical element that do not overlap any other anatomical element in the image) while preserving the portions of the second anatomical element that overlap other anatomical elements. In such embodiments, the pixels associated with the non-overlapping portions of the second anatomical element may be subtracted or cleared, which may eliminate the visibility of the non-overlapping portions of the second anatomical element (when viewed on a user interface).
In some embodiments, the image processing algorithm may use or calculate an average pixel value of the pixels associated with the second anatomical element (e.g., pixels from the non-overlapping portions of the second anatomical element, pixels from the portions where the second anatomical element overlaps the first anatomical element, and/or combinations thereof) and subtract that average pixel value from the pixels associated with the second anatomical element. In still other embodiments, the pixel values associated with the non-overlapping portions of the second anatomical element may be cleared or set to a background pixel value (e.g., an average pixel value of the image excluding values associated with the segmented anatomical elements, as determined by the segmentation and/or image processing algorithms), while the average pixel value of the non-overlapping portions of the second anatomical element may be subtracted from the pixels associated with the overlapping portions of the second anatomical element (e.g., such that the pixel values of the overlapping portions are approximately the pixel values associated with the first anatomical element).
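A non-limiting sketch of the averaging variant just described follows: non-overlapping rib pixels are reset to a background estimate, and the rib's mean excess intensity is subtracted inside the overlap so that approximately vertebra-level values remain. The mask names and the background estimate are assumptions introduced for this example.

```python
# Hedged sketch: pixel subtraction of an incidental element using its mean
# excess intensity over the background.
import numpy as np

def subtract_incidental(image, target_mask, incidental_mask):
    out = image.astype(float).copy()
    overlap = incidental_mask & target_mask
    incidental_only = incidental_mask & ~target_mask

    # Background estimated from pixels outside all segmented anatomy.
    background = out[~(incidental_mask | target_mask)].mean()
    mean_excess = out[incidental_only].mean() - background if incidental_only.any() else 0.0

    out[incidental_only] = background   # clear the visible rib
    out[overlap] -= mean_excess         # leave approximately vertebral values
    return out
```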
In some embodiments, the pixel subtraction may be based on one or more gradient lines. As previously described, one or more gradient lines may be identified and associated with an anatomical element or a set of anatomical elements (as discussed in connection with step 220). The image processing algorithm may receive an image including anatomical elements identified or marked based on the one or more gradient lines, and may remove the pixels of incidental or extraneous anatomical elements based on those gradient lines. For example, the image processing algorithm may receive an image in which gradient lines are associated with a rib, which may be unrelated to a surgery performed on the vertebrae. The image processing algorithm may then remove one or more pixels associated with the rib (e.g., a portion or the entirety of the rib) based on the contour or boundary of the rib.
In some embodiments, the pixel subtraction may be based on an expected shape of one or more of the anatomical elements present in the image. As previously discussed, the expected shape may be a predetermined curve, structure, contour, curvature, or the like associated with an anatomical element that can distinguish or uniquely identify that anatomical element. The expected shape may be based on the 3D image of the anatomical element in question, data available in one or more other 2D images of the anatomical element in question, an anatomical atlas, a 3D model of the anatomical element in question, or any other information about the anatomical element in question. In some embodiments, the image processing algorithm may receive an image that includes information related to the expected shape of each of the anatomical elements present in the image. The image processing algorithm may then remove pixels associated with unrelated or incidental anatomical elements based on the expected shapes of the anatomical elements. For example, a first anatomical element may be identified as a rib and a second anatomical element may be identified as a vertebra. The image processing algorithm may then remove, modify, or subtract some or all of the pixel values associated with the extraneous anatomical element (in this case, the rib).
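Purely for illustration, the following sketch shows one way an expected shape could be used to decide which segmented element is extraneous: each element's silhouette is compared to pre-aligned shape templates (e.g., derived from an atlas or the 3D image) by a simple intersection-over-union score, and elements matching the "rib" template better than the "vertebra" template would be marked for pixel removal. The templates, their alignment, and the metric are all assumptions made for this example.

```python
# Hedged sketch: classify a segmented element against expected-shape templates.
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def classify_by_expected_shape(element_mask, templates):
    """templates: {name: boolean mask of the expected shape, pre-aligned}.
    Returns the template name with the highest overlap score."""
    return max(templates, key=lambda name: iou(element_mask, templates[name]))
```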
The present disclosure encompasses embodiments of the method 300 comprising more or fewer steps than those described above and/or one or more steps different than those described above.
As described above, the present disclosure encompasses methods having fewer than all of the steps identified in fig. 2 and 3 (and corresponding descriptions of methods 200 and 300), as well as methods including additional steps beyond those identified in fig. 2 and 3 (and corresponding descriptions of methods 200 and 300). The present disclosure also encompasses methods comprising one or more steps from one method described herein and one or more steps from another method described herein. Any of the correlations described herein may be or include registration or any other correlation.
The foregoing is not intended to limit the disclosure to one or more of the forms disclosed herein. In the foregoing detailed description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. Features of aspects, embodiments, and/or configurations of the present disclosure may be combined in alternative aspects, embodiments, and/or configurations than those discussed above. The methods of the present disclosure should not be construed as reflecting the following intent: the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Furthermore, while the foregoing has included descriptions of one or more aspects, embodiments and/or configurations, and certain variations and modifications, other variations, combinations, and modifications are within the scope of this disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims (22)

1. A method, the method comprising:
receiving a Computed Tomography (CT) image of a patient;
segmenting a first set of anatomical elements from the CT image;
receiving a plurality of fluoroscopic images of the patient;
segmenting a second set of anatomical elements from the plurality of fluoroscopic images; and
creating a registration between the CT image and the plurality of fluoroscopic images based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
2. The method of claim 1, wherein the segmenting the second set of anatomical elements further comprises determining that a first anatomical element overlaps a second anatomical element.
3. The method of claim 2, wherein the first anatomical element is a vertebra and the second anatomical element is a rib.
4. The method of claim 2, wherein the determining comprises detecting a gradient line within a boundary of the second anatomical element in at least one of the plurality of fluoroscopic images.
5. The method of claim 2, further comprising subtracting pixels corresponding to the first anatomical element from at least one of the plurality of fluoroscopic images.
6. The method of claim 5, wherein the subtracting is based on information about an expected shape of at least one of the first anatomical element or the second anatomical element.
7. The method of claim 1, wherein creating the registration comprises matching at least one first gradient corresponding to at least one anatomical element of the first set of anatomical elements with at least one second gradient corresponding to at least one anatomical element of the second set of anatomical elements.
8. The method of claim 1, further comprising removing one or more gradient lines from at least one of the plurality of fluoroscopic images.
9. The method of claim 1, wherein the first set of anatomical elements comprises at least one of patella or soft tissue anatomical elements.
10. A system, the system comprising:
a processor; and
a memory storing instructions that, when executed by the processor, cause the processor to:
receiving a three-dimensional (3D) image of a patient anatomy;
segmenting a first set of anatomical elements from the 3D image;
causing an imaging device to capture one or more two-dimensional (2D) images of the patient anatomy;
segmenting a second set of anatomical elements from the one or more 2D images;
cleaning the one or more 2D images by removing at least one gradient line from each 2D image of the one or more 2D images; and
registering the one or more cleaned 2D images to the 3D image based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
11. The system of claim 10, wherein the segmenting comprises determining that a first anatomical element overlaps a second anatomical element.
12. The system of claim 11, wherein the segmentation is based on information about an expected shape of at least one of the first anatomical element or the second anatomical element.
13. The system of claim 10, wherein the at least one gradient line is located in an anatomical element in the second set of anatomical elements.
14. The system of claim 10, wherein the segmentation of the segmented second set of anatomical elements further comprises defining a boundary around at least one anatomical tissue.
15. The system of claim 14, further comprising subtracting pixels corresponding to the at least one anatomical tissue from the segmented second set of anatomical features.
16. The system of claim 14, wherein the boundary defines a region indicative of overlap between the at least one anatomical tissue and the segmented anatomical object of the second set of anatomical elements.
17. The system of claim 10, wherein the segmenting further comprises identifying one or more gradient lines associated with each anatomical element in the first set of anatomical elements.
18. The system of claim 10, wherein the 3D image and the one or more 2D images omit use of fiducials.
19. A system, the system comprising:
a processor;
an imaging device; and
a memory having instructions stored thereon that, when executed by the processor, cause the processor to:
generating a three-dimensional (3D) model;
causing the imaging device to capture one or more two-dimensional (2D) images;
segmenting a first set of anatomical elements from the 3D image;
segmenting a second set of anatomical elements from each 2D image of the one or more 2D images, the segmenting comprising defining a boundary of a first anatomical object;
removing the first anatomical object from at least one 2D image of the one or more 2D images to produce one or more cleaned 2D images;
registering the one or more cleaned 2D images to the 3D image based on the segmented first set of anatomical elements and the segmented second set of anatomical elements.
20. The system of claim 19, wherein the removing the first anatomical object further comprises subtracting pixels corresponding to the first anatomical object from the at least one of the one or more 2D images.
21. The system of claim 19, wherein the 3D image is a CT scan, an MRI scan, or an ultrasound.
22. The system of claim 19, wherein the one or more 2D images are fluoroscopic images, MRI images, or ultrasound images.
