WO2025046505A1 - Systems and methods for patient registration using 2D image planes
- Publication number
- WO2025046505A1 (PCT/IB2024/058414)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image plane
- processor
- patient
- image
- imaging device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- the present disclosure is generally directed to image registration, and relates more particularly to registration using stereo image data.
- Imaging may be used by a medical provider for diagnostic and/or therapeutic purposes during a surgery or surgical procedure.
- Patient anatomy can change over time, particularly following placement of a medical implant in the patient anatomy.
- Example aspects of the present disclosure include:
- a system comprises: a processor; and a memory coupled to the processor and storing data thereon that, when processed by the processor, enables the processor to: receive three-dimensional (3D) scan data depicting a portion of a patient generated with an imaging device; project the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; match, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and register, based on the matching, the 3D scan data and the imaging device.
- the memory stores further data for processing by the processor that, when processed, enables the processor to: generate a similarity metric indicating an amount of similarity between the area of the first 2D image plane and the area of the second 2D image plane.
- the memory stores further data for processing by the processor that, when processed, enables the processor to: iteratively optimize an objective function associated with the similarity metric.
- the processor further: generates an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generates a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determines, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusts the transformation matrix using the objective function.
- a system comprises: an imaging device; a processor; and a memory coupled to the processor and storing data thereon that, when processed by the processor, enables the processor to: receive three-dimensional (3D) scan data depicting a portion of a patient generated with the imaging device; project the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; match, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and register, based on the matching, the 3D scan data and the imaging device.
- the memory stores further data for processing by the processor that, when processed, enables the processor to: generate a similarity metric indicating an amount of similarity between the area of the first 2D image plane and the area of the second 2D image plane.
- the memory stores further data for processing by the processor that, when processed, enables the processor to: iteratively optimize an objective function associated with the similarity metric.
- the processor further: generates an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generates a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determines, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusts the transformation matrix using the objective function.
- a method comprises: receiving three-dimensional (3D) scan data depicting a portion of a patient generated with an imaging device; projecting the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; matching, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and registering, based on the matching, the 3D scan data and the imaging device.
- iteratively optimizing the objective function further comprises: generating an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generating a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determining, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusting the transformation matrix using the objective function.
- the geometry of the portion of the patient comprises information about a contour of an anatomical element of the patient.
- each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
- each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or a class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo.
- the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2), as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
- FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure
- FIG. 2A shows additional aspects of the system according to at least one embodiment of the present disclosure
- Fig. 2B shows a projection of three-dimensional (3D) scan data into a plurality of two-dimensional (2D) image planes according to at least one embodiment of the present disclosure
- Fig. 2C shows aspects of the plurality of 2D image planes according to at least one embodiment of the present disclosure
- Fig. 3 is a flowchart according to at least one embodiment of the present disclosure.
- Fig. 4 is a flowchart according to at least one embodiment of the present disclosure.
- the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- processors such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry
- a patient is registered to a coordinate system of a surgical navigation system using stereo image data.
- the process of registration involves registering a 3D image (e.g., patient exam data) to a 3D scene (e.g., a patient on an operating table) by first projecting the 3D scene onto two or more 2D image planes.
- One approach for completing the registration is to de-project the 2D images to 3D space by finding corresponding pixels in multiple 2D image planes and using triangulation to determine the position of the area represented by the pixel relative to the cameras in 3D space.
- the stereo imaging system may search pixel-by-pixel looking for correspondences that can be de-projected to construct a 3D point cloud.
- the 3D image of the scene and the 3D exam can then be registered using point cloud registration techniques.
- the process of de-projecting from the 2D space to the 3D space requires identifying corresponding pixels using similarity metrics between the images.
- the matching of the corresponding pixels may limit the accuracy of the registration.
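- As a rough illustration of the de-projection step described above (an illustrative sketch, not the implementation of the present disclosure), the following Python example triangulates one matched pixel pair back into 3D space using the standard linear (DLT) method; the camera matrices, baseline, and point values below are invented for the example.

```python
import numpy as np

def triangulate(P_left, P_right, px_left, px_right):
    """Linear (DLT) triangulation of one matched pixel pair into a 3D point.

    P_left, P_right: 3x4 camera projection matrices (intrinsics @ extrinsics).
    px_left, px_right: (u, v) pixel coordinates of the same scene point in each view.
    """
    u1, v1 = px_left
    u2, v2 = px_right
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

if __name__ == "__main__":
    # Hypothetical calibration: shared intrinsics, cameras offset along x (a stereo baseline).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

    # A scene point in front of the rig, projected into both views.
    X_true = np.array([50.0, 20.0, 1000.0, 1.0])
    uvw_l, uvw_r = P_left @ X_true, P_right @ X_true
    px_l, px_r = uvw_l[:2] / uvw_l[2], uvw_r[:2] / uvw_r[2]

    print(triangulate(P_left, P_right, px_l, px_r))  # recovers ~[50, 20, 1000]
```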
- systems and methods for registering the exam data in the 2D image planes provide a constraint on this correspondence problem to improve the accuracy of the registration.
- a known geometry is used to constrain the correspondence problem. Instead of searching for individual corresponding pixels, the systems and methods of embodiments disclosed herein may search for a set of correspondences that best describe a known geometry.
- the known geometry may be a 3D medical image (e.g., a preoperative 3D scan of the patient). The registration process may be performed entirely in 2D image planes.
- An algorithm according to at least one example embodiment may begin with an initial guess for a pose (e.g., a position and an orientation) of the patient relative to a camera. The 3D exam data of the patient may then be projected onto the 2D image planes.
- a similarity metric is calculated for the region surrounding the projected pixel locations in the 2D images.
- the similarity metrics of the point(s) are then combined into an overall objective function describing how well the initial guess fits the raw image data.
- the objective function can then be used to adjust the pose of the patient and/or adjust the transformation matrix, and the process may be repeated until a best fit is achieved (e.g., until the objective function is optimized).
- other image-based registration metrics such as mutual information, contour matching, combinations thereof, and/or the like can be included in the objective function to further improve registration accuracy.
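- The loop described above can be paraphrased in code. The sketch below is only an illustrative outline under assumed interfaces: the projection callables, the patch-similarity callable, and the 6-vector pose parameterization are hypothetical placeholders, not the disclosed implementation.

```python
import numpy as np

def registration_objective(pose, exam_points, left_img, right_img,
                           project_left, project_right, patch_similarity):
    """Score how well a candidate patient pose explains the raw stereo images.

    pose: candidate rigid-transform parameters (e.g., a 6-vector: rotation + translation).
    exam_points: Nx3 array of points taken from the 3D exam data.
    project_left / project_right: callables mapping a 3D point under `pose` to pixel coordinates.
    patch_similarity: callable scoring the image patches around a pair of pixel locations.
    All callables stand in for whatever projection model and similarity metric is used.
    """
    total = 0.0
    for point in exam_points:
        px_l = project_left(pose, point)
        px_r = project_right(pose, point)
        # Higher similarity between the left and right patches means the candidate pose
        # places this exam point consistently in both image planes.
        total += patch_similarity(left_img, right_img, px_l, px_r)
    return -total  # minimize the negative so that a better fit gives a lower objective
```

- An optimizer would start from the initial pose guess and repeatedly evaluate such an objective, adjusting the pose (and thus the transformation matrix) until the best fit is achieved; additional terms such as mutual information or contour matching could simply be added to the returned score.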
- Embodiments of the present disclosure beneficially improve the accuracy of photo-based patient registration by constraining the stereo matching problem using known geometric data, which may enable single-snapshot patient registration.
- Embodiments of the present disclosure may also beneficially enable a multi-camera and/or stereo camera registration system.
- the resolution of the stereo camera may be determined by the resolution of the image sensors used in the stereo camera.
- the resolution of the stereo camera can be limited by the need to check for correspondences between all captured pixels. As the resolution increases, the processing time also increases. Using systems and methods described herein, only projected points from the exam point cloud are checked for correspondence. Accordingly, increased resolution of the camera sensors does not increase processing time.
- Embodiments of the present disclosure provide technical solutions to one or more of the problems of (1) high resolution stereo camera systems creating increased processing times, (2) inaccurate registration, and (3) inaccurate correspondence matching.
- a block diagram of a system 100 may be used to register a patient to a surgical navigation system coordinate system; to register patient exam data to a camera or other imaging device; to control, pose, and/or otherwise manipulate a surgical mount system and/or surgical tools attached thereto; and/or to carry out one or more other aspects of one or more of the methods disclosed herein.
- the system 100 comprises a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, a database 130, and/or a cloud or other network 134.
- Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 100.
- the system 100 may not include the robot 114, one or more components of the computing device 102, the database 130, and/or the cloud 134.
- the computing device 102 is illustrated to include a processor 104, a memory 106, a communication interface 108, and a user interface 110.
- Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 102.
- the processor 104 of the computing device 102 may be any processor described herein or any similar processor.
- the processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device 112, the robot 114, the navigation system 118, the database 130, and/or the cloud 134.
- the processor 104 may be or comprise one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
- the memory 106 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions.
- the memory 106 may store information or data useful for completing, for example, any step of the methods 300 and/or 400 described herein, or of any other methods.
- the memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the computing device 102, the imaging devices 112, the navigation system 118, and/or the like.
- the communication interface 108 may be used for receiving image data or other information from an external source (such as the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component not part of the system 100), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 102, the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component not part of the system 100).
- the communication interface 108 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth).
- the communication interface 108 may be useful for enabling the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
- the computing device 102 may also comprise one or multiple user interfaces 110.
- the user interface(s) 110 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
- the user interface(s) 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100.
- the user interface 110 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 104 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 110 or corresponding thereto.
- the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102.
- the user interface 110 may be located proximate one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computer device 102.
- the imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.).
- image data refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form.
- the image data may be or comprise 3D image data 132 generated by one or more 3D imaging device(s) 140 (e.g., an O-arm, a C-arm, a G-arm, a CT scanner, etc.) and/or 2D image data 136 generated by one or more 2D imaging device(s) 144 (e.g., an emitter/detector pair).
- the image data may comprise data corresponding to an anatomical feature of a patient, or to a portion thereof.
- the image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure.
- the imaging device 112, the 3D imaging device(s) 140, and/or the 2D imaging device(s) 144 may be capable of taking a 2D image or a 3D image to yield the image data.
- the imaging device 112 may be or comprise, for example, a stereo camera, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient.
- the imaging device 112 may be contained entirely within a single housing, or may comprise a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
- a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time
- a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
- the first imaging device can be an imaging device that is used for navigation (e.g., in conjunction with the navigation system 118).
- the first imaging device may be or comprise, for example, any of the example imaging devices described above (an ultrasound scanner, an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging, a magnetic resonance imaging scanner, an OCT scanner, an endoscope, a microscope, an optical camera, a thermographic camera, a radar system, a stereo camera, etc.).
- the second imaging device may be an imaging device used for registration (e.g., to facilitate alignment of patient scan data with the patient in the surgical environment).
- the second imaging device may be or comprise a stereo camera, as discussed in further detail below.
- the imaging device 112 may comprise more than one imaging device 112.
- a first imaging device may provide first image data and/or a first image
- a second imaging device may provide second image data and/or a second image.
- the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein.
- the imaging device 112 may be operable to generate a stream of image data.
- the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
- image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
- reference markers (e.g., navigation markers) may be tracked by the navigation system 118, and the results of the tracking may be used by an operator of the system 100 or any component thereof.
- the imaging device 112 may be or comprise a stereo camera or other similar imaging device with two or more lenses and separate image sensors for each lens.
- Each image sensor may generate image information to the computing device 102, which may use image processing 120 to generate an image from the image information generated by the image sensor.
- the image sensors may be physically spaced apart or otherwise separated from one another in the imaging device 112, such that each image sensor captures a different view of an object imaged by the imaging device 112.
- the stereo camera may be used to facilitate registration of a patient to a coordinate system associated with the navigation system, as discussed in further detail below.
- the robot 114 may be any surgical robot or surgical robotic system.
- the robot 114 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system.
- the robot 114 may be configured to position the imaging device 112 at one or more precise position(s) and orientation(s), and/or to return the imaging device 112 to the same position(s) and orientation(s) at a later point in time.
- the robot 114 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 118 or not) to accomplish or to assist with a surgical task.
- the robot 114 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure.
- the robot 114 may comprise one or more robotic arms 116.
- the robotic arm 116 may comprise a first robotic arm and a second robotic arm, though the robot 114 may comprise more than two robotic arms.
- one or more of the robotic arms 116 may be used to hold and/or maneuver the imaging device 112.
- the imaging device 112 comprises two or more physically separate components (e.g., a transmitter and receiver)
- one robotic arm 116 may hold one such component
- another robotic arm 116 may hold another such component.
- Each robotic arm 116 may be positionable independently of the other robotic arm.
- the robotic arms 116 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
- the robot 114, together with the robotic arm 116, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 116 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 112, surgical tool, or other object held by the robot 114 (or, more specifically, by the robotic arm 116) may be precisely positionable in one or more needed and specific positions and orientations.
- the robotic arm(s) 116 may comprise one or more sensors that enable the processor 104 (or a processor of the robot 114) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
- reference markers may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space.
- the reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.
- the navigation system 118 can be used to track other components of the system (e.g., imaging device 112) and the system can operate without the use of the robot 114 (e.g., with the surgeon manually manipulating the imaging device 112 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 118, for example).
- the navigation system 118 may provide navigation during an operation.
- the navigation system 118 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof.
- the navigation system 118 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers, or other objects within the operating room or other room in which some or all of the system 100 is located.
- the one or more cameras may be optical cameras, infrared cameras, or other cameras.
- the navigation system 118 may comprise one or more electromagnetic sensors.
- the navigation system 118 may be used to track a position and orientation (e.g., a pose) of the imaging device 112, the robot 114 and/or the robotic arm 116, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to the one or more of the foregoing).
- the navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing device 102, imaging device 112, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 118.
- the system 100 can operate without the use of the navigation system 118.
- the navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or a component thereof or to any other element of the system 100 (e.g., the robot 114) regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
- the database 130 may store information that correlates one coordinate system to another (e.g., a patient coordinate system to a navigation coordinate system or vice versa).
- the database 130 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient’s anatomy at and/or proximate the surgical site, for use by the robot 114, the navigation system 118, and/or a user of the computing device 102 or of the system 100); one or more images useful in connection with a surgery to be completed by or with the assistance of one or more other components of the system 100; and/or any other useful information.
- the database 130 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud 134.
- the database 130 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
- the cloud 134 may be or represent the Internet or any other wide area network.
- the computing device 102 may be connected to the cloud 134 via the communication interface 108, using a wired connection, a wireless connection, or both.
- the computing device 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud 134.
- the system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 300 and/or 400 described herein.
- the system 100 or similar systems may also be used for other purposes.
- FIG. 2A depicts an imaging device 202 that operates as a stereo camera with a first image sensor 204 and a second image sensor 208 that can be used to capture stereo images of the patient 212.
- the imaging device 202 may be similar to or the same as the imaging device 112 or any other imaging device depicted and/or described herein.
- the first image sensor 204 and the second image sensor 208 are each capable of generating information or image data that can be processed to produce a pair of 2D images of the patient 212.
- the first image sensor 204 and the second image sensor 208 may be separated by a predetermined or otherwise known distance, such that the field of view (FOV) of the first image sensor 204 is different than the FOV of the second image sensor 208.
- the first image sensor 204 may have a first FOV 216 while the second image sensor 208 may have a different second FOV 220.
- data generated by the first image sensor 204 is different from the data generated by the second image sensor 208 when the imaging device 202 images the patient 212, such that each image of the pair of 2D images depicts features of the patient 212 from different angles, directions, and/or orientations.
- a first 2D image of the pair of 2D images depicts features of the patient 212 in a first orientation
- a second 2D image of the pair of 2D images depicts the features of the patient 212 from a different angle, direction, and/or orientation.
- the first image sensor 204 is positioned left of a centerline 214 of the patient 212 while the second image sensor 208 is positioned to the right of the centerline 214 of the patient 212.
- the resulting data from the first image sensor 204, when processed by image processing 120, may result in an image depicting a left-side view of the patient 212 (also referred to herein as a left eye view of the patient 212).
- the resulting data from the second image sensor 208, when processed by the image processing 120, may result in an image depicting a right-side view of the patient 212 (also referred to herein as a right eye view of the patient 212).
- one or both of the 2D images may be rendered to a display to enable a user of the system 100 to view the 2D image(s).
- additional or alternative imaging of the patient may occur, and any portion of the patient 212 may be imaged by the stereo camera to produce a pair of 2D images depicting the portion of the patient 212.
- the pose of the first image sensor 204 relative to the second image sensor 208 may be known or determined by the system 100.
- the pose information may be stored in the database 130. Additionally or alternatively, the pose information may be determined by the navigation system 118 based on the pose of tracked fiducials in a known pose relative to the first image sensor 204 and/or the second image sensor 208.
- the imaging device 202 may project one or more light fields that can be used during the course of stereo image capture.
- the imaging device 202 may project light fields in a predetermined pattern that are captured by the first image sensor 204 and the second image sensor 208.
- the pair of 2D images may include depictions of the light field in different orientations.
- the light fields may be used as additional information when performing correspondence matching between the pair of 2D images to improve accuracy of the correspondence matching.
- the stereo camera with the first image sensor 204 and the second image sensor 208 may be used during the course of registering the patient 212 to the imaging device 202.
- the registration may begin with the processor 104 receiving 3D scan data (which may be similar to or the same as 3D image data 132) associated with the patient (e.g., preoperative images, intraoperative images taken before the registration process, etc.).
- the processor 104 may access the database 130 to retrieve the 3D scan data, may receive the 3D scan data from another component of the system 100, or may receive the 3D scan data from a component outside of the system 100.
- the 3D scan data may be or comprise a CT scan, an MRI scan, fluoroscopic images, combinations thereof, and/or the like comprising information about one or more portions of the patient 212.
- the 3D scan data may be a CT scan of a portion or the entirety of the patient 212 depicting the pose of one or more anatomical elements that was taken before the registration process.
- the processor 104 may perform an initial guess of the pose of the patient 212 relative to the imaging device 202, and generate a transformation 124 that maps coordinates associated with the imaging device 202 to a coordinate system associated with the patient 212.
- the initial guess of the patient pose may be based on information retrieved from the database 130 (such as previously-used initial guesses from previous similar surgeries or surgical procedures or a default initial guess), information received via the user interface 110 (e.g., a user such as a physician enters an initial guess), information generated by the processor 104 (e.g., a random number generator), a default initial guess based on a recommended setup of the operating room (e.g., based on how the stereo camera is positioned relative to the operating table), combinations thereof, and/or the like.
- the transformation 124 may be or comprise one or more algorithms, machine learning models, artificial intelligence models, combinations thereof, and/or the like capable of transforming coordinates associated with one coordinate system into coordinates associated with another, different coordinate system.
- the transformation 124 may comprise one or more transformation matrices that transform the 3D coordinates associated with the imaging device 202 into coordinates associated with the patient 212.
- the processor 104 may project the 3D scan data onto a pair of 2D image planes.
- the pair of 2D image planes may represent or correspond to the image planes seen by the first image sensor 204 and the second image sensor 208 of the imaging device 202.
- the pair of 2D image planes may be similar to any stereo image planes discussed herein.
- the pair of 2D image planes comprises a left image plane 224 that is associated with the first image sensor 204 and a right image plane 228 that is associated with the second image sensor 208.
- the projection onto the pair of 2D image planes is based on the expected views or depictions of the 3D scan data generated by the first image sensor 204 and the second image sensor 208 when the patient 212 is imaged by the imaging device 202.
- the processor 104 projects the 3D scan data onto the left image plane 224 to depict a left eye image that would have been generated using information from the first image sensor 204 had the patient 212 been imaged by the imaging device 202 while the patient 212 was in the initial guess pose.
- the processor 104 projects the 3D scan data into the right image plane 228 to depict an image that would have been generated using information from the second image sensor 208 had the patient 212 been imaged by the imaging device 202 while the patient 212 was in the initial guess pose.
- the processor 104 may use image processing 120 when projecting the 3D scan data onto the left image plane 224 and the right image plane 228.
- the image processing 120 may be or comprise one or more algorithms, machine learning models, artificial intelligence models, combinations thereof, and/or the like that map the 3D scan data to 2D image planes.
- the image processing 120 may map each 3D data point in the 3D scan data to corresponding locations in each of the left image plane 224 and the right image plane 228.
- a point 240 of the 3D scan data may be projected along a left trace line 232 into the left image plane 224, and along a different right trace line 236 to the right image plane 228, resulting in the point 240 being depicted in a different location in the left image plane 224 than in the right image plane 228.
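- As a small illustration (not taken from the disclosure), the pinhole-model sketch below projects a single exam-space point, playing the role of the point 240, into two image planes; the intrinsic matrix, stereo baseline, and the coordinates of the example point are assumptions chosen for the example.

```python
import numpy as np

def project_point(K, cam_from_exam, point_exam):
    """Project a 3D exam-space point into one camera's 2D image plane.

    K: 3x3 camera intrinsic matrix.
    cam_from_exam: 4x4 transform from exam coordinates to that camera's coordinates
                   (e.g., the current estimate of the registration transform).
    """
    p_cam = cam_from_exam @ np.append(point_exam, 1.0)  # into camera coordinates
    uvw = K @ p_cam[:3]                                  # pinhole projection
    return uvw[:2] / uvw[2]                              # pixel (u, v)

# Hypothetical setup: shared intrinsics, right camera shifted 100 mm along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
left_from_exam = np.eye(4)
right_from_exam = np.eye(4)
right_from_exam[0, 3] = -100.0

point_240 = np.array([25.0, -10.0, 900.0])  # one point of the 3D scan data
print(project_point(K, left_from_exam, point_240))   # location in the left image plane
print(project_point(K, right_from_exam, point_240))  # a different location in the right image plane
```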
- the processor 104 may perform correspondence matching between the two 2D image planes.
- the correspondence matching includes the processor 104 identifying points in both the left image plane 224 and the right image plane 228 that correspond to the same location in 3D space.
- the point 240 corresponds to a point in 3D space, but is depicted differently in the left image plane 224 than in the right image plane 228.
- the processor 104 may identify the point 240 in both a portion of the left image plane 224 and a portion of the right image plane 228 and determine that these two portions represent different views of the point 240.
- the processor 104 may perform correspondence matching for each data point from the 3D scan that was projected into the pair of 2D image planes.
- the correspondence matching may be limited by one or more parameters.
- when the processor 104 uses a pixel-by-pixel evaluation of the left image plane 224 and the right image plane 228, individual differences in pixel value between the depictions of a data point in the 3D scan data may cause the processor 104 to fail to match pixels that correspond to the same data point.
- This failure of correspondence matching for some data points may negatively impact the accuracy of the registration, since fewer matching data points result in sparser 3D point clouds, which may result in larger uncertainty of the position of the patient 212 in 3D space.
- the one or more parameters may be used to confine the evaluation of the left image plane 224 and the right image plane 228, such that a greater number of correspondences are matched.
- the one or more parameters comprises a predetermined or known geometry of the patient 212.
- the processor 104 may know and use the geometry of one or more portions of the patient 212 when performing correspondence matching.
- a left area 244 of the left image plane 224 may include a first feature 252 associated with a known geometry or portion of the patient 212 (e.g., a structure just under the eye, a spinous process of a vertebra, etc.) that the processor 104 may identify.
- the processor 104 may identify a similar or the same feature in the right image plane 228, such as a second feature 256 in a right area 248.
- the correspondence may ensure that the matched areas correspond to the same location in 3D space.
- the one or more parameters comprises contour matching when comparing portions of the left image plane 224 and the right image plane 228.
- the first feature 252 may be a curved portion of the spine of the patient 212.
- the processor 104 may identify the first feature 252 and then search for a corresponding curve in the right image plane 228.
- the processor 104 may determine that the first feature 252 and the second feature 256 correspond to the same location in 3D space.
- the processor 104 may use image processing 120 (e.g., edge detection algorithms) to identify the edges or contours of various portions of the 2D images while performing the correspondence matching.
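- One simple way to obtain the contours used for such matching is gradient-based edge detection. The sketch below is a generic illustration, not necessarily the edge detection performed by image processing 120; the patch contents are synthetic.

```python
import numpy as np

def edge_map(patch, threshold=0.5):
    """Binary edge map from image gradients (a very simple stand-in for edge detection)."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    return magnitude > threshold

def contour_overlap(patch_left, patch_right):
    """Fraction of edge pixels shared between the two patches' edge maps (IoU)."""
    e_l, e_r = edge_map(patch_left), edge_map(patch_right)
    union = np.logical_or(e_l, e_r).sum()
    return np.logical_and(e_l, e_r).sum() / union if union else 0.0

# Toy patches containing the same curved contour (e.g., a spinal curve seen in both planes).
yy, xx = np.mgrid[0:32, 0:32]
curve = (np.abs(yy - (0.03 * (xx - 16) ** 2 + 8)) < 1.5).astype(float)
print(contour_overlap(curve, curve))                       # identical contours -> overlap near 1.0
print(contour_overlap(curve, np.roll(curve, 8, axis=0)))   # shifted contour -> lower overlap
```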
- the one or more parameters comprises mutual information matching when comparing portions of the left image plane 224 and the right image plane 228.
- Mutual information may be based on a pattern of pixel values over a portion of the left image plane 224 and/or the right image plane 228, regardless of the actual pixel values.
- the projection of the 3D scan data into the pair of 2D image planes may result in an instance where the point 240 includes the first feature 252 that changes from a high pixel value to a low pixel value in the left image plane 224, while the second feature 256 of the right image plane 228 changes from a low pixel value to a high pixel value, with both features 252, 256 displaying the same amount of change.
- the processor 104 may identify that the features, though having different pixel values that change in different directions, represent the same location in 3D space, and may correspond the two features together.
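- As an illustrative, generic formulation (not necessarily the one used here), mutual information between two patches can be estimated from their joint intensity histogram; the sketch below shows why a patch and its inverted counterpart, whose pixel values change in opposite directions, can still score highly.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    """Estimate mutual information between two equally sized image patches."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal over patch_a
    py = pxy.sum(axis=0, keepdims=True)         # marginal over patch_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
inverted = 1.0 - patch                 # same structure, opposite direction of change
unrelated = rng.random((16, 16))

print(mutual_information(patch, inverted))   # high: the patterns carry the same information
print(mutual_information(patch, unrelated))  # low: no shared structure
```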
- the one or more parameters may be based on the region surrounding the pixels in each of the pair of 2D image planes.
- the geometry, contour, or other information may be captured in a region around each pixel (e.g., a 3x2 pixel region, a 3x3 pixel region, a 3x4 pixel region, a 4x4 pixel region, a 5x5 pixel region, etc.).
- the processor 104 may compare a measurement of the similarity between the two features or regions to a threshold value (e.g., a value stored in the database 130) and correspond the two features to the same location in 3D space when the measurement of similarity is greater than or equal to a threshold value.
- the processor 104 may generate a similarity metric.
- the similarity metric may be or comprise a measurement or indicator of a degree of match or amount of similarity between the region surrounding the projected pixel locations.
- the amount of similarity may be based on the overall pixel value in the region (e.g., average pixel intensity of the area, average pixel entropy of the area, etc.).
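- One plausible region-based similarity metric is normalized cross-correlation of the patches surrounding the projected pixel locations; the sketch below is an illustrative choice (the disclosure also mentions intensity- and entropy-based measures), and the threshold value, image contents, and pixel locations are invented.

```python
import numpy as np

def extract_patch(image, center, half_size=2):
    """Square region around a projected pixel location (e.g., 5x5 for half_size=2)."""
    r, c = int(round(center[1])), int(round(center[0]))  # center given as (u, v) = (col, row)
    return image[r - half_size:r + half_size + 1, c - half_size:c + half_size + 1]

def normalized_cross_correlation(patch_a, patch_b):
    """Similarity in [-1, 1]; 1 means the patches vary identically around their means."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# Hypothetical use: accept a correspondence only if the similarity clears a threshold.
SIMILARITY_THRESHOLD = 0.8  # e.g., a value retrieved from the database 130

rng = np.random.default_rng(1)
left_image = rng.random((480, 640))
right_image = np.roll(left_image, 3, axis=1)          # same content, shifted 3 px horizontally

patch_l = extract_patch(left_image, (320.0, 240.0))
patch_r = extract_patch(right_image, (323.0, 240.0))  # candidate match at the shifted location
score = normalized_cross_correlation(patch_l, patch_r)
print(score, score >= SIMILARITY_THRESHOLD)           # high score -> treat as a correspondence
```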
- the processor 104 may construct an objective function based on the similarity metrics that describes how well the initial guess of the patient pose fits the 3D projected data.
- the objective function may indicate how accurate the initial guess was at generating a transformation that maps stereo camera coordinates into coordinates associated with the patient.
- the objective function may then be iteratively optimized to adjust the transformation to better align the projected data with the stereo camera, and the optimized transformation may be used to register the stereo camera with the patient 212, as discussed in further detail below.
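- The iterative optimization could be driven by any general-purpose optimizer over the pose parameters. The sketch below uses scipy.optimize.minimize with a toy quadratic objective standing in for the real image-based objective; the 6-vector pose parameterization and the target values are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Pose parameterized as a 6-vector: three rotation angles and three translations.
initial_guess = np.zeros(6)

# Stand-in for the real objective: in practice this would project the exam data under the
# candidate pose and score patch similarity in the left and right image planes.
target_pose = np.array([0.1, -0.05, 0.2, 15.0, -4.0, 30.0])  # invented "true" pose
def objective(pose):
    return float(np.sum((pose - target_pose) ** 2))

result = minimize(objective, initial_guess, method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-9})
print(result.x)  # the refined pose, from which the camera-to-exam transform would be rebuilt
```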
- the 3D scan data of the patient is projected into 2D images, while any data captured by the imaging device 202 is not projected into 2D images.
- the 3D scan data may be or comprise a 3D model constructed based on a preoperative image of soft tissues and/or hard tissues of the patient 212 captured before a surgery or surgical procedure.
- the 3D model may be a vector-based CAD model or the like that comprises a point cloud of data that is projected into the 2D images.
- the nature of the 3D model may change based on the imaging device used to generate the 3D scan data (e.g., a 3D model generated from a CT scan may comprise more data points than a 3D model based on an x-ray emitter/detector configuration).
- the 3D model may also change based on the type of anatomical tissue of the patient 212 that is imaged (e.g., the 3D model may be different for hard tissues than for soft tissues).
- the imaging device 202 may then capture another 3D surgical scene image of the patient 212 (e.g., when the patient 212 is laying on an operating table).
- the processor 104 may then use data points from the preoperative image when performing the initial guess, transformation, and iterative optimization of the objective function for registration. Accordingly, additional processing time associated with higher resolution imaging devices 202 can be minimized, effectively conserving resources.
- Fig. 3 depicts a method 300 that may be used, for example, to register 3D scan data to an imaging device.
- the method 300 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
- the at least one processor may be part of a robot (such as a robot 114) or part of a navigation system (such as a navigation system 118).
- a processor other than any processor described herein may also be used to execute the method 300.
- the at least one processor may perform the method 300 by executing elements stored in a memory such as the memory 106.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 300.
- One or more portions of a method 300 may be performed by the processor executing any of the contents of memory, such as an image processing 120, a segmentation 122, a transformation 124, and/or a registration 128.
- the method 300 comprises receiving 3D scan data depicting a portion of a patient generated with an imaging device (step 304).
- the 3D scan data may be based on one or more preoperative or intraoperative images of the patient 212 captured using the imaging device 112 or another imaging device.
- the one or more images may be or comprise a CT scan, an MRI scan, two or more fluoroscopic images, combinations thereof, and/or the like.
- the 3D scan data may be or comprise a point cloud representing one or more portions of the patient 212.
- the portions may comprise features of the patient such as anatomical elements or features (e.g., eyes, nose, ears, mouth, arms, legs, torso, vertebrae, etc.).
- the 3D scan data may be retrieved from the database 130 and rendered to the user interface 110.
- the method 300 also comprises projecting the 3D scan data into a plurality of 2D image planes including a first 2D image plane and a second 2D image plane (step 308).
- the projecting of the 3D scan data may be based on a transformation matrix that, when applied to the imaging device 112 or other imaging device, transforms coordinates associated with the imaging device 112 or other imaging device into coordinates associated with the patient 212.
- the projection onto the pair of 2D image planes is based on the expected views or depictions of the 3D scan data generated by the first image sensor 204 and the second image sensor 208 when the patient 212 is imaged by the imaging device 112.
- the method 300 also comprises matching, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space (step 312).
- the matching includes the processor 104 identifying points in both the left image plane 224 and the right image plane 228 that correspond to the same location in 3D space.
- the point 240 corresponds to a point in 3D space, but is depicted differently in the left image plane 224 than in the right image plane 228.
- the processor 104 may identify the point 240 in both a portion of the left image plane 224 and a portion of the right image plane 228 and determine that these two portions represent different views of the point 240.
- the processor 104 may perform correspondence matching for each data point from the 3D scan that was projected into the pair of 2D image planes.
- the geometry of the patient 212 may be a predetermined or known shape of the patient 212 that may be used to constrain the matching of the areas of the 2D image planes. Since the 2D image planes are constructed from 3D scan data of the patient, the processor 104 may know and use the geometry of one or more portions of the patient 212 when performing correspondence matching. For example, a left area 244 of the left image plane 224 may include a first feature 252 associated with a known geometry or portion of the patient 212 (e.g., a structure just under the eye, a spinous process of a vertebra, etc.) that the processor 104 may identify.
- the processor 104 may identify a similar or the same feature in the right image plane 228, such as a second feature 256 in a right area 248.
- By constraining the correspondence to require that the same geometry be present in both areas before the areas are matched, the correspondence may ensure that the matched areas correspond to the same location in 3D space.
- the step 312 may include additional or alternative constraints on the correspondence matching.
- contour matching may be implemented when comparing portions of the left image plane 224 and the right image plane 228.
- the contour matching may be based on identified contours within the left image plane 224 and the right image plane 228.
- the first feature 252 may be a curved portion of the spine of the patient 212.
- the processor 104 may identify the first feature 252 and then search for a corresponding curve in the right image plane 228.
- the processor 104 may determine that the first feature 252 and the second feature 256 correspond to the location in 3D space.
- the processor 104 may use image processing 120 (e.g., edge detection algorithms) to identify the edges or contours of various portions of the 2D images while performing the correspondence matching.
- mutual information matching may be used when comparing portions of the left image plane 224 and the right image plane 228.
- Mutual information may be based on a pattern of pixel values over a portion of the left image plane 224 and/or the right image plane 228, regardless of the actual pixel values.
- the projection of the 3D scan data into the pair of 2D image planes may result in an instance where the point 240 includes the first feature 252 that changes from a high pixel value to a low pixel value in the left image plane 224, while the second feature 256 of the right image plane 228 changes from a low pixel value to a high pixel value, with both features 252, 256 displaying the same amount of change.
- the processor 104 may identify that the features, though having different pixel values that change in different directions, represent the same location in 3D space, and may correspond the two features together.
- the method 300 also comprises registering, based on the matching, the 3D scan data and the imaging device (step 316).
- the transformation matrix used to project the 3D data may be used as the registration between the 3D scan data and the imaging device.
- the transformation matrix may be the result of an iterative optimization of an objective function, such that the transformation matrix may represent the best mapping from the stereo camera to the patient 212.
- the transformation matrix may be further used, along with the known position of the stereo camera in 3D space, to register the stereo camera and the patient 212 in the surgical scene. Based on these registrations, the patient 212 and the navigation system 118 may then be registered, enabling the navigation system 118 to navigate one or more components (e.g., surgical tools) relative to the patient 212.
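- Chaining these registrations amounts to composing rigid transforms. The sketch below uses made-up matrices to show how a navigation-to-patient transform could follow from the optimized camera-to-patient transform and a known navigation-to-camera relationship; it illustrates the composition only, not the disclosed system's internals.

```python
import numpy as np

def rigid(rz_deg, t):
    """4x4 homogeneous transform: rotation about z by rz_deg, then translation t."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Hypothetical results of the two known relationships:
patient_from_camera = rigid(30.0, [100.0, -50.0, 400.0])  # output of the optimization
camera_from_nav = rigid(-90.0, [0.0, 250.0, 0.0])          # tracked/calibrated camera pose

# Composition yields the registration the navigation system 118 needs:
patient_from_nav = patient_from_camera @ camera_from_nav

tool_tip_nav = np.array([10.0, 20.0, 30.0, 1.0])           # a tracked tool tip in nav coordinates
print((patient_from_nav @ tool_tip_nav)[:3])                # the same tip in patient coordinates
```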
- the present disclosure encompasses embodiments of the method 300 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Fig. 4 depicts a method 400 that may be used, for example, to iteratively optimize an objective function for registration.
- the method 400 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
- the at least one processor may be part of a robot (such as a robot 114) or part of a navigation system (such as a navigation system 118).
- a processor other than any processor described herein may also be used to execute the method 400.
- the at least one processor may perform the method 400 by executing elements stored in a memory such as the memory 106.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 400.
- One or more portions of the method 400 may be performed by the processor executing any of the contents of memory, such as the image processing 120, the segmentation 122, the transformation 124, and/or the registration 128.
- the method 400 comprises receiving 3D scan data depicting a portion of a patient generated with an imaging device (step 404).
- the step 404 may be similar to or the same as the step 304 of the method 300.
- the 3D scan data may be or comprise a point cloud representing one or more portions of the patient 212.
- the portions may comprise features of the patient such as anatomical elements or features (e.g., eyes, nose, ears, mouth, arms, legs, torso, vertebrae, etc.).
- the method 400 also comprises generating an initial guess about a pose of the patient in a surgical environment (step 408).
- the initial guess of the patient pose may be based on information retrieved from the database 130 (such as previously used initial guesses from previous similar surgeries or surgical procedures, or a default initial guess), information received via the user interface 110 (e.g., a user such as a physician enters an initial guess), information generated by the processor 104 (e.g., a random number generator), a default initial guess based on a recommended setup of the operating room (e.g., based on how the stereo camera is positioned relative to the operating table), combinations thereof, and/or the like.
- the initial guess may additionally or alternatively comprise information about the pose of the patient relative to the stereo camera within the surgical environment.
- the method 400 also comprises generating a transformation matrix associated with the pose of the patient (step 412).
- the transformation matrix may be generated by transformation 124, which may be or comprise one or more algorithms, machine learning models, artificial intelligence models, combinations thereof, and/or the like capable of transforming coordinates associated with one coordinate system into coordinates associated with another, different coordinate system.
- the transformation 124 may comprise one or more transformation matrices that transform the 3D coordinates associated with the imaging device 112 into coordinates associated with the patient 212.
- the transformation 124 may generate the transformation matrix based on historical data from other similar surgeries or surgical procedures.
- the method 400 also comprises projecting, using the transformation matrix, the 3D scan data into a left 2D image plane and a right 2D image plane (step 416).
- the left 2D image plane is associated with the first image sensor 204, while the right 2D image plane is associated with the second image sensor 208.
- the projection onto the left and right 2D image planes is based on the expected views or depictions of the 3D scan data generated by the first image sensor 204 and the second image sensor 208 when the patient 212 is imaged by the imaging device 112.
- the processor 104 projects the 3D scan data onto the left image plane 224 to depict a left eye image that would have been generated using information from the first image sensor 204 had the patient 212 been imaged by the imaging device 112 while the patient 212 was in the initial guess pose.
- the processor 104 projects the 3D scan data into the right image plane 228 to depict an image that would have been generated using information from the second image sensor 208 had the patient 212 been imaged by the imaging device 112 while the patient 212 was in the initial guess pose.
- the processor 104 may use image processing 120 when projecting the 3D scan data onto the left image plane 224 and the right image plane 228.
- the image processing 120 may be or comprise one or more algorithms, machine learning models, artificial intelligence models, combinations thereof, and/or the like that map the 3D scan data to 2D image planes.
- the image processing 120 may map each 3D data point in the 3D scan data to corresponding locations in each of the left image plane 224 and the right image plane 228.
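- A simplified pinhole-camera sketch of this projection step is given below, assuming a 4x4 pose guess, a 3x3 intrinsic matrix K, and a horizontal baseline between the two sensors; a real system would use the stereo camera's actual calibration rather than these assumed inputs.

```python
# Minimal sketch: projecting 3D scan points into hypothetical left/right image
# planes with a pinhole model. All inputs are assumptions for the example.
import numpy as np

def project_to_stereo_planes(points_scan: np.ndarray,      # (N, 3) point cloud
                             T_cam_from_scan: np.ndarray,  # 4x4 pose guess
                             K: np.ndarray,                # 3x3 intrinsics
                             baseline: float):             # sensor separation (m)
    # Move points into the (left) camera frame using the current pose guess.
    homog = np.hstack([points_scan, np.ones((points_scan.shape[0], 1))])
    pts_left = (T_cam_from_scan @ homog.T).T[:, :3]
    # The right sensor is offset along the camera x-axis by the baseline.
    pts_right = pts_left - np.array([baseline, 0.0, 0.0])

    def pinhole(pts):
        # Assumes all points lie in front of the camera (positive depth).
        uvw = (K @ pts.T).T
        return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> pixel coordinates

    return pinhole(pts_left), pinhole(pts_right)
```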
- the method 400 also comprises determining, for each point in the 3D scan data, a similarity metric of a region surrounding projected pixel locations for the 2D image planes (step 420).
- the region surrounding the projected pixel locations may comprise one or more proximate pixels (e.g., a 3x2 pixel region, a 3x3 pixel region, a 3x4 pixel region, a 4x4 pixel region, a 5x5 pixel region, etc.).
- the similarity metric may be or comprise a measurement or indicator of a degree of match or amount of similarity between the regions surrounding the projected pixel locations.
- the amount of similarity may be based on the overall pixel value in the region (e.g., average pixel intensity of the area, average pixel entropy of the area, etc.).
- the similarity metric may be based on the Hamming distance between the pixels in the first 2D image plane region and the pixels in the second 2D image plane region after a census transform has been performed in each of the two regions.
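- The census transform and Hamming distance mentioned above could be computed roughly as in the sketch below; the patch size and data types are illustrative assumptions, and lower Hamming distances indicate more similar regions.

```python
# Minimal sketch: census transform of a small patch followed by the Hamming
# distance between the two resulting bit vectors, one per image plane.
import numpy as np

def census(patch: np.ndarray) -> np.ndarray:
    """Bit vector: 1 where a pixel is brighter than the patch center."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch > center).astype(np.uint8).ravel()

def census_hamming(patch_left: np.ndarray, patch_right: np.ndarray) -> int:
    # Count the positions where the two census bit vectors disagree.
    return int(np.count_nonzero(census(patch_left) != census(patch_right)))
```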
- the method 400 also comprises combining similarity metrics into an overall objective function describing how well the initial guess fits the 3D scan data (step 424).
- the objective function may be or comprise a mathematical combination of each similarity metric calculated between the regions of the first 2D image plane and the corresponding region of the second 2D image plane.
- the objective function is the sum of absolute differences between the regions of the 2D image planes. The value of the objective function, when large, may indicate that the first 2D image plane regions do not match well with the regions of the second 2D image plane, suggesting that the transform used to project the 3D scan data could be improved.
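- A minimal sum-of-absolute-differences objective over corresponding regions might look like the following sketch, where left_regions and right_regions are assumed iterables of same-shaped pixel patches, one pair per projected 3D point (illustrative names, not disclosure terms).

```python
# Minimal sketch: combine per-point region comparisons into one scalar objective;
# smaller totals suggest the current transformation matrix fits the stereo views better.
import numpy as np

def sad_objective(left_regions, right_regions) -> float:
    return float(sum(np.abs(l.astype(float) - r.astype(float)).sum()
                     for l, r in zip(left_regions, right_regions)))
```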
- additional or alternative image-based registration metrics discussed herein, such as contour matching and mutual information matching, may be integrated into the objective function.
- a measure of the percentage of regions that found complementary or corresponding contours may be provided by the objective function.
- a low value of the objective function may indicate that few contours were found and matched, suggesting that the transformation matrix could be improved.
- the method 400 also comprises adjusting, when the objective function is not optimized, the transformation matrix (step 432).
- the processor 104 may determine an adjustment to the transformation matrix to lower the overall objective function.
- the optimization may use one or more numerical optimization methods (e.g., gradient descent, stochastic gradient descent, Newton's method, etc.) to adjust the transformation matrix so that the objective function moves toward a maximum or minimum value.
- the adjusted transformation matrix may then be used in steps 416, 420, and 424 to produce an updated objective function.
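- One possible (assumed) way to drive this loop is to hand the combined objective to a generic numerical optimizer over six pose parameters, as in the sketch below; the objective_for_pose callable is assumed to project the 3D scan with a candidate pose, sample the corresponding left/right regions, and return the combined objective value, along the lines sketched earlier.

```python
# Minimal sketch: iteratively refining the pose guess by minimizing the overall
# objective with a generic optimizer from SciPy.
import numpy as np
from scipy.optimize import minimize

def refine_pose(initial_pose: np.ndarray,   # e.g., [tx, ty, tz, rx, ry, rz]
                objective_for_pose) -> np.ndarray:
    """objective_for_pose(pose) returns the scalar objective for that pose guess."""
    result = minimize(objective_for_pose, initial_pose, method="Powell")
    return result.x   # refined pose parameters, convertible to a transformation matrix
```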
- Example 1 A system, comprising: a processor; and a memory coupled to the processor and storing data thereon that, when processed by the processor, enables the processor to: receive three-dimensional (3D) scan data depicting a portion of a patient generated with an imaging device; project the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; match, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and register, based on the matching, the 3D scan data and the imaging device.
- Example 2 The system of example 1, wherein the geometry of the portion of the patient is determined from a 3D image of the portion of the patient.
- Example 3 The system of any of examples 1 to 2, wherein the memory stores further data for processing by the processor that, when processed, enables the processor to: generate a similarity metric indicating an amount of similarity between the area of the first 2D image plane and the area of the second 2D image plane.
- Example 4 The system of example 3, wherein the amount of similarity is based on an entropy of the area of the first 2D image plane and an entropy of the area of the second 2D image plane.
- Example 5 The system of any of examples 3 to 4, wherein the memory stores further data for processing by the processor that, when processed, enables the processor to: iteratively optimize an objective function associated with the similarity metric.
- Example 6 The system of example 5, wherein as part of the iterative optimization, the processor further: generates an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generates a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determines, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusts the transformation matrix using the objective function.
- Example 7 The system of any of examples 1 to 6, wherein the geometry of the portion of the patient comprises a contour of an anatomical element of the patient.
- Example 8 A system, comprising: an imaging device; a processor; and a memory coupled to the processor and storing data thereon that, when processed by the processor, enables the processor to: receive three-dimensional (3D) scan data depicting a portion of a patient generated with the imaging device; project the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; match, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and register, based on the matching, the 3D scan data and the imaging device.
- Example 9 The system of example 8, wherein the geometry of the portion of the patient is determined from a 3D image of the portion of the patient.
- Example 10 The system of any of examples 8 to 9, wherein the memory stores further data for processing by the processor that, when processed, enables the processor to: generate a similarity metric indicating an amount of similarity between the area of the first 2D image plane and the area of the second 2D image plane.
- Example 11 The system of example 10, wherein the amount of similarity is based on an entropy of the area of the first 2D image plane and an entropy of the area of the second 2D image plane.
- Example 12 The system of any of examples 10 to 11, wherein the memory stores further data for processing by the processor that, when processed, enables the processor to: iteratively optimize an objective function associated with the similarity metric.
- Example 13 The system of example 12, wherein as part of the iterative optimization, the processor further: generates an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generates a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determines, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusts the transformation matrix using the objective function.
- Example 14 The system of any of examples 8 to 13, wherein the geometry of the portion of the patient comprises a contour of an anatomical element of the patient.
- Example 15 A method, comprising: receiving three-dimensional (3D) scan data depicting a portion of a patient generated with an imaging device; projecting the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; matching, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and registering, based on the matching, the 3D scan data and the imaging device.
- Example 16 The method of example 15, wherein the geometry of the portion of the patient is determined from a 3D image of the portion of the patient.
- Example 17 The method of any of examples 15 to 16, further comprising: generating a similarity metric indicating an amount of similarity between the area of the first 2D image plane and the area of the second 2D image plane.
- Example 18 The method of example 17, further comprising: iteratively optimizing an objective function associated with the similarity metric.
- Example 19 The method of example 18, wherein iteratively optimizing the objective function further comprises: generating an initial guess for a transformation matrix that transforms coordinates from an imaging device coordinate system associated with the imaging device to an exam coordinate system associated with the 3D scan data; generating a second similarity metric that indicates an amount of similarity between a second area of the first 2D image plane and a second area of the second 2D image plane; determining, based on a combination of the first similarity metric and the second similarity metric, the objective function; and adjusting the transformation matrix using the objective function.
- Example 20 The method of any of examples 15 to 19, wherein the geometry of the portion of the patient comprises information about a contour of an anatomical element of the patient.
Abstract
A system or method according to the present disclosure includes a processor; and a memory coupled to the processor and storing data thereon that, when processed by the processor, enables the processor to: receive three-dimensional (3D) scan data depicting a portion of a patient generated with an imaging device; project the 3D scan data into a plurality of two-dimensional (2D) image planes including a first 2D image plane and a second 2D image plane; match, based on a geometry of the portion of the patient, an area of the first 2D image plane to an area of the second 2D image plane that corresponds to a common location in 3D space; and register, based on the matching, the 3D scan data and the imaging device.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363536253P | 2023-09-01 | 2023-09-01 | |
| US63/536,253 | 2023-09-01 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025046505A1 (fr) | 2025-03-06 |
Family
ID=92894820
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/058414 Pending WO2025046505A1 (fr) | 2023-09-01 | 2024-08-29 | Systèmes et procédés d'enregistrement de patient à l'aide de plans d'image 2d |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025046505A1 (fr) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023006021A1 (fr) * | 2021-07-30 | 2023-02-02 | 武汉联影智融医疗科技有限公司 | Procédé et système d'enregistrement |
| US20240221190A1 (en) * | 2021-07-30 | 2024-07-04 | Wuhan United Imaging Healthcare Surgical Technology Co., Ltd. | Methods and systems for registration |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24776351; Country of ref document: EP; Kind code of ref document: A1 |