
WO2025188860A1 - Anatomy reconstruction and imaging tasks based on manual movement of imaging devices - Google Patents

Anatomy reconstruction and imaging tasks based on manual movement of imaging devices

Info

Publication number
WO2025188860A1
Authority
WO
WIPO (PCT)
Prior art keywords
patient anatomy
radiographic
radiographic images
patient
model
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/018516
Other languages
French (fr)
Other versions
WO2025188860A8 (en)
Inventor
Scott Arthur Banks
John David COX
George Sheldon JACHODE
Oren Benjamin ANDERSON
Jack Dean COLE, JR.
Oliver Christoph KESSLER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orthopedic Driven Imaging LLC
Original Assignee
Orthopedic Driven Imaging LLC
Application filed by Orthopedic Driven Imaging LLC
Publication of WO2025188860A1
Publication of WO2025188860A8

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02: Arrangements for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
    • A61B6/03: Computed tomography [CT]
    • A61B34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101: Computer-aided simulation of surgical operations
    • A61B2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046: Tracking techniques
    • A61B2034/2048: Tracking techniques using an accelerometer or inertia sensor
    • A61B2034/2055: Optical tracking systems
    • A61B2034/2059: Mechanical position encoders
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39: Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3966: Radiopaque markers visible in an X-ray image
    • A61B90/50: Supports for surgical instruments, e.g. articulated arms

Definitions

  • aspects described herein relate generally to anatomic imaging of a joint using fluoroscopy. More particularly, aspects relate to dynamic imaging of a joint in three dimensions and performing various surgical, clinical, and procedural evaluations during the manipulation of the joint while utilizing the fluoroscopy images for guidance and measurement.
  • TKA: total knee arthroplasty
  • a computer-implemented method sequentially captures a plurality of radiographic images of patient anatomy based on manual movement of an imaging device, the manual movement providing a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images, wherein the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images.
  • the method also reconstructs the patient anatomy based on the captured plurality of radiographic images and the recorded positional information, the reconstructing providing a three-dimensional (3D) model of the patient anatomy.
  • the method further registers the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images.
  • the method also builds and outputs an interface that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image, the at least one radiographic image being a radiographic image of the plurality of radiographic images or a radiographic image obtained subsequent to the sequentially capturing.
  • the method also includes sequentially capturing additional radiographic images of the patient anatomy as the patient performs physical movement, where the additional radiographic images reflect anatomical movement of the patient anatomy based on the physical movement, and also includes registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images, and updating the interface to provide an animated representation of the anatomical movement. Additionally, the method can include determining kinematic properties of the patient anatomy based on the registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images, and updating the interface to indicate the determined kinematic properties.
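  • As an illustrative aid (not from the patent), the following Python sketch shows one way kinematic properties could be derived from registered poses: given 4x4 femur and tibia pose matrices from model-image registration, it computes the relative transform and a flexion-style angle. The pose format and rotation convention are assumptions.

```python
# Hypothetical sketch: a simple kinematic property (knee flexion angle)
# derived from two registered 4x4 bone poses. Conventions are assumed.
import numpy as np

def relative_pose(T_femur, T_tibia):
    """Pose of the tibia expressed in the femur's coordinate frame."""
    return np.linalg.inv(T_femur) @ T_tibia

def flexion_angle_deg(T_rel):
    """Rotation about the assumed medial-lateral (x) axis, in degrees."""
    R = T_rel[:3, :3]
    return np.degrees(np.arctan2(R[2, 1], R[2, 2]))

# Example: identity femur pose, tibia flexed 30 degrees about x.
theta = np.radians(30.0)
T_femur, T_tibia = np.eye(4), np.eye(4)
T_tibia[:3, :3] = [[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]]
print(round(flexion_angle_deg(relative_pose(T_femur, T_tibia)), 1))  # 30.0
```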
  • the building includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy.
  • the method further includes sequentially capturing additional radiographic images as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images, and updating the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
  • the method also includes performing at least one of annotating the 3D model of the patient anatomy with reference geometry or landmarking the patient anatomy as reflected in the plurality of radiographic images, wherein the building includes in the interface annotations based on the annotating or landmarks based on the landmarking.
  • the patient anatomy includes bones of a joint of a patient.
  • the manual movement is dynamically selected and performed by an operator without predefinition of the radiographic projections and without predefinition of positioning of the imaging device about the patient anatomy.
  • a system can include an imaging device and detector arranged in a fixed position relative to each other, an arm to which the imaging device and detector are attached, where movement of the arm repositions the imaging device and the detector in space and the imaging device and detector remain in the fixed position relative to each other, a display device, a memory, and processing circuit(s) in communication with the memory, and the system can be configured to perform method(s), example aspects of which are recited above and herein.
  • FIGs. 1A-1B depict example imaging environments to incorporate and/or use in accordance with aspects described herein;
  • FIG. 2 depicts an example imaging environment to incorporate and/or use in accordance with aspects described herein;
  • FIGs. 3A-3C depict an example of a clinical examination in accordance with aspects described herein;
  • FIG. 4 depicts an example conceptual workflow for manual tomography in accordance with aspects described herein;
  • FIG. 5 depicts an example conceptual workflow for a kinematic evaluation in accordance with aspects described herein;
  • FIG. 6 depicts an example conceptual workflow for fluoroscopy-guided navigation for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
  • FIG. 7 depicts an example conceptual workflow for fluoroscopy-guided navigation and C-arm displacement sensing for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
  • FIG. 8 depicts an example conceptual workflow for fluoroscopy- and LiDAR- or optical-guided navigation and C-arm displacement sensing for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
  • FIG. 9 depicts an example representation of relationships between and among aspects described herein for performing imaging tasks based on manual movement of a radiographic imaging system;
  • FIG. 10 depicts an example process for anatomy reconstruction and related tasks based on manual movement of an imaging device, in accordance with aspects described herein;
  • FIG. 11 shows an example computer system to incorporate and/or use aspects described herein.
  • Computed Tomography (CT) scanners, special-purpose C-arms, and special-purpose O-arms are used for acquiring images of three-dimensional (3D) anatomic regions in the clinic or in surgery.
  • C-arm fluoroscopy is used, to some extent, for surgical assessment and navigation.
  • current C-arm technology often requires a separate wired connection to a host of expensive add-on technologies in order to augment the minimal capability set natively provided by the C-arm. Consequently, implementing advanced functions like image-guided surgery and model-image registration requires add-on carts that connect to the imaging system and add cost, complexity, capital outlays, additional maintenance demands, and operating room time.
  • the three-dimensional anatomic model comes from a CT (or other 3D imaging, e.g., MRI) scan performed separately from the surgical procedure, and the registration is performed by an add-on computer cart connected to the fluoroscopic system.
  • a fluoroscopy system endowed with the capabilities to determine three-dimensional bone or implant pose, and to perform three-dimensional computations onboard the imaging system without any add-on carts, is desired and would provide a much more efficient tool in terms of time, cost, and avoidance of errors for accurate completion of surgical procedures.
  • C-arm imaging machines are never used to acquire images when in motion, and special-purpose C-arms are motorized imaging machines having a programmed path of circular motion around the patient’s anatomy, acquiring images at specific angular increments around the patient to perform three-dimensional reconstruction of the patient’s anatomy.
  • for such use, a C-arm would need to have an image acquisition capability (which does not exist on most systems) and features enabling continuous capture during manual operation.
  • Some existing systems have pulsed imaging capability while the C-arm is in motion, but these systems are not manually operated.
  • provided is a specialized tomography device, for instance a cone beam CT device, instrumented with a sensor package that accounts for C-arm displacement for arbitrary motions in order to register the position of the C-arm relative to the patient’s anatomy.
  • an integrated and instrumented device is provided to accomplish advanced functions using C-arms in surgical, procedural, and clinical settings. Also provided are methods/processes for execution by computer system(s), for instance those of, or in communication with, a C-arm or imaging device as described herein to perform described aspects.
  • An imaging system is provided for reconstructing three-dimensional anatomy optimized for surgery, procedural, and clinical settings.
  • Example imaging systems and methods of the present disclosure endow traditional two-dimensional fluoroscopic imaging systems with the ability to acquire three-dimensional anatomic reconstructions and perform other advanced imaging tasks. These may be provided without the addition of expensive motors, control systems, and/or special-purpose add-on elements (such as compute carts) for visualization or other purposes.
  • Aspects presented herein modernize imaging devices (for instance, the C-arm) and create a real-time navigation and image analysis platform for use in, for example, clinical settings, orthopedics, and vascular surgery, as examples.
  • the present disclosure recognizes that three-dimensional anatomic reconstructions can be acquired from an arbitrary distribution of radiographic projections if the spatial pose (i.e., position and orientation) of each projection is known. Further, the present disclosure describes an articulated X-ray positioning device that is instrumented to record the rotation/displacement of each joint, so that the spatial pose for every radiographic projection can be measured.
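  • As a non-authoritative illustration of recording the spatial pose for every radiographic projection, the following Python sketch computes a source pose from recorded articulation values via a forward-kinematics chain. The joint layout (lift column, tilt pivot, orbital rotation, source offset), function names, and dimensions are all assumptions, not the patented device's kinematics.

```python
# Sketch: pose of each projection from logged articulation readings of an
# instrumented positioner (hypothetical C-arm kinematic model).
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[1:3, 1:3] = [[c, -s], [s, c]]
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4); T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]
    return T

def projection_pose(orbital_rad, tilt_rad, column_height_m, arm_radius_m=0.7):
    # Assumed chain: lift column -> tilt pivot -> orbital rotation -> source offset.
    return (trans(0.0, 0.0, column_height_m)
            @ rot_x(tilt_rad)
            @ rot_z(orbital_rad)
            @ trans(arm_radius_m, 0.0, 0.0))

# One pose per exposure, computed from the logged articulation readings.
log = [(0.0, 0.0, 1.0),
       (np.radians(15), 0.0, 1.0),
       (np.radians(30), np.radians(5), 1.0)]
poses = [projection_pose(*state) for state in log]
print(poses[1][:3, 3])   # x-ray source position for the second exposure
```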
  • a digital X-ray imaging system is provided that is capable of acquiring images using pulsed/timed image acquisitions so that blur-free images are obtained. This may be performed while the X-ray positioning device is in motion. This allows the system operator to manually move the X-ray system over or around the target anatomy in approximate and/or somewhat arbitrary trajectories to acquire a bundle of X-ray projections from which the three-dimensional anatomy is reconstructed. Some embodiments provide sufficient computational resources onboard the imaging system to permit the rapid reconstruction of the three-dimensional anatomy without transferring the images to a separate device.
  • Example imaging systems and methods also provide a technical specification and software interface that enables third party developers to create imaging applications on and/or for the instrumented imaging platform.
  • pulsed image acquisition with large digital image detectors, instrumented positioner articulations, onboard computational capabilities, and a standardized software interface are proposed to provide a platform for implementing unique radiographic imaging methods, including three-dimensional anatomy reconstruction based on manual movement of the device, three-dimensional skeletal pose and kinematics measurement, surgical procedure guidance, and other methods.
  • This suite of augmentations can extend a standard imaging device (for instance, the C-arm) into an all-in-one, instrumented device capable of performing a multitude of radiographic imaging modalities.
  • the present disclosure provides hardware and software systems capable of executing image acquisition and measurement tasks to implement a protocol of TKA kinematic observation, or any type of protocol involving observation of human joints during activities of daily living.
  • a hardware and software solution is provided herein as a stand-alone portable U-arm system (or C-arm) for use, for example, in a clinic, or as an upgrade to common C-arm systems that can be used in the operating room, or stationary U-arm systems that can be used in the clinic.
  • a common software environment operating on these platforms can provide image acquisition and analysis functions to implement pre-operative planning, kinematic examinations, surgical guidance, needle guidance, etc., and the results of one examination can be shared with another compatible platform.
  • An imaging system may include an imaging platform with an image chain.
  • the image chain may include a fluoroscopic class system that includes an x-ray pulse generator system, a solid-state x-ray image receptor and a computer system to coordinate the function of the x-ray generator and the image receptor.
  • the imaging system may also include a computer software package for implementation with the computer system.
  • a method of using the imaging system may include evaluating kinematics of a joint of a patient pre-, intra-, and/or post-procedurally.
  • TKA may be an example procedure, before, during, or after which a patient’s knee can be kinematically evaluated in accordance with aspects described herein.
  • the example kinematic examination described herein analyzes the motions of the knee joint as the subject patient anatomy.
  • aspects described herein, including imaging systems and methods of the present disclosure, can be applied to any joint (or more broadly, patient anatomy and/or an implant) for determining its motion.
  • while specific procedural and surgical examples are provided, the present disclosure is applicable to any of varying other procedural and surgical examples, even those not explicitly stated herein.
  • Imaging systems of the present disclosure overcome this limitation because they have the capability to perform imaging, such as limited-arc cone-beam computed tomography (CBCT), to create the bone and/or implant models.
  • a kinematic examination may include a relatively brief scan of the patient’s joint in which the system operator sweeps the imaging system over/around the target joint while a collection of fluoroscopic images is recorded.
  • the resulting images are used to reconstruct the 3D bones and/or implants of the joint (and any other objects of interest), and optionally models for model-image registration.
  • this approach has the specific benefit that a generated model has the geometry of both the bone and the implant, which can result in much better measurement performance than having an implant-only model for kinematic analysis.
  • the system can use those models for measurements and reduce the total radiation exposure required to conduct the examination.
  • Output of model-image registration measurement processing may be the 3D positions and orientations of the observed bones, implants and/or other objects during the activities constituting the kinematic examination. This information can be used to generate a range of tabular, statistical, graphic, and animated displays of joint kinematics. Clinicians and surgeons can use these for diagnoses, surgical planning, post-operative assessment of a patient’s joint function, and/or any other desired applications.
  • aspects described herein may perform various tasks, examples of which are examination protocols (such as TKA examination protocols detailed in some examples herein) as well as a wide range of other kinematic examination protocols focused on any of varying joints, ranges of motion, and activities of daily living.
  • aspects can more generally be applied to a variety of clinical, procedural, and surgical settings, providing clinicians with the ability to analyze the kinematics of any joint in any pre-, intra-, or post-operative state and an autonomous analysis of the images to provide accurate measurements of joint kinematics.
  • Examples may also be applied to surgical navigation, wherein a surgeon utilizes aspects described herein to locate the pose of the patient anatomy (e.g., bone) in 3D space in order to effect some procedure on that anatomy.
  • a provided image chain includes a fluoroscopic class system including an x-ray pulse generator system, a solid-state x-ray image receptor, and a computer system to perform activities that include coordinating the function of the x-ray generator and the image receptor.
  • the x-ray generator system can generate suitably ‘bright’ x-ray pulses of short duration, such as 10 milliseconds (ms) or less, and with the ability to generate a minimum number of pulses per second (for instance, at least 10).
  • the image receptor system can be or include a solid-state x-ray imaging panel.
  • An example imaging panel may be at least 12” x 10” in dimension, and may be capable of acquiring images at a minimum number (for instance 10) of frames per second.
  • pixel binning, such as 2 x 1 pixel binning, may be acceptable.
  • the computer system may include a processing circuit, such as one or more processors, for instance one or more central processing units, adequate memory and, optionally, auxiliary processing resources, such as one or more graphic processing units, for compute-intensive image manipulations.
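  • Purely as a sketch of the example image-chain minimums stated above (the field names and validation helper are illustrative, not from the patent), the stated specifications could be captured in a small configuration object:

```python
# Sketch: the example image-chain minimums as a validated configuration.
from dataclasses import dataclass

@dataclass
class ImageChainSpec:
    pulse_duration_ms: float    # x-ray pulse length
    pulses_per_second: int      # generator pulse rate
    panel_width_in: float       # detector width
    panel_height_in: float      # detector height
    frames_per_second: int      # detector acquisition rate
    pixel_binning: str = "2x1"  # binning mode, e.g. "none", "2x1"

    def meets_minimums(self) -> bool:
        """Checks the example minimums described herein."""
        return (self.pulse_duration_ms <= 10.0
                and self.pulses_per_second >= 10
                and self.panel_width_in >= 12.0
                and self.panel_height_in >= 10.0
                and self.frames_per_second >= 10)

spec = ImageChainSpec(pulse_duration_ms=8.0, pulses_per_second=15,
                      panel_width_in=12.0, panel_height_in=10.0,
                      frames_per_second=15)
assert spec.meets_minimums()
```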
  • Kinematic measurements may be performed using model-image registration.
  • a process can virtually position a 3D model of the anatomy (and/or implant geometry) in an equivalent x-ray projection geometry, and generate an artificial x-ray image based on the model’s virtual position and orientation in space relative to the virtual imaging components.
  • the artificial x-ray projection can then be compared to the acquired x-ray image of the joint.
  • This process can be iteratively repeated until the two projections closely align/match (e.g., within a predetermined tolerance). That match provides the 3D location and orientation of the bone and/or implant in space.
  • the process can be repeated for each image acquired as part of the kinematic examination.
  • the process can also be used for surgical navigation.
  • information used to perform this process can include (1) the geometry of the x-ray projection, (2) the fluoroscopic image(s), and (3) a 3D model of the bone or implant whose kinematics are being measured.
  • Some aspects described herein relate to (3), the 3D anatomic model, and more specifically to approaches that overcome significant hurdles encountered in current practice and conventional solutions for model-image registration to measure kinematics and the pose of bones and/or implants in 3D space.
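  • As a non-authoritative illustration of the iterative model-image registration loop described above, the following Python sketch adjusts a 6-DoF pose until a synthetic projection of a toy point model matches an 'observed' projection. The point model, pinhole projection, cost function, and SciPy's Nelder-Mead optimizer are assumptions standing in for DRR generation and image-similarity scoring.

```python
# Toy sketch of model-image registration (not the patented method).
import numpy as np
from scipy.optimize import minimize

def pose_matrix(p):
    """(rx, ry, rz, tx, ty, tz) -> 4x4 rigid transform (illustrative convention)."""
    rx, ry, rz, tx, ty, tz = p
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def project(points, T, focal=1000.0):
    """Pinhole projection of model points posed by T."""
    P = (T[:3, :3] @ points.T).T + T[:3, 3]
    return focal * P[:, :2] / P[:, 2:3]

model = np.random.default_rng(0).normal(size=(50, 3)) * 0.05    # toy 'bone'
observed = project(model, pose_matrix([0.1, -0.05, 0.2, 0.01, 0.02, 1.0]))

def cost(p):  # image-dissimilarity stand-in: mean squared 2D error
    return np.mean((project(model, pose_matrix(p)) - observed) ** 2)

fit = minimize(cost, x0=[0, 0, 0, 0, 0, 0.9], method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-12, "xatol": 1e-10})
print(np.round(fit.x, 3))   # approaches the true pose parameters
```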
  • FIGs. 1A-1B and FIG. 2 depict example imaging environments to incorporate and/or use in accordance with aspects described herein.
  • An upgraded C-arm fluoroscopy system as in FIG. 1A, an upgraded C-arm fluoroscopy system with a monitor cart as in FIG. 1B, and a portable U-arm fluoroscopy system as in FIG. 2 are provided as non-limiting examples.
  • These environments provide a combination of hardware and software to facilitate kinematic examination in similar manners as described.
  • FIGs. 1A-1B and FIG. 2 are just examples; any suitable imaging platform can be included.
  • a stationary U-arm sometimes described as a floor-mounted fluoroscopic system, or a stationary C-arm, may provide a suitable environment.
  • the imaging, analytical, and computational capabilities described herein can be hosted on a variety of physical positioning systems, and the illustrated examples presented and described herein are not meant to limit the present disclosure or exclude other positioning systems.
  • the example environment 100 includes a C-arm platform having a surgical C-arm 106.
  • An example of such platform is the OEC line of devices offered by GE Medical Systems Inc. that may have an upgraded image chain to perform aspects described herein.
  • the image chain may endow the system with the capabilities required for a kinematic examination of a joint in a clinical or surgical environment.
  • the C-arm includes a detector 102 and an imaging device 104.
  • the imaging device 104 may be or include an x-ray source.
  • the detector 102 may be any appropriate shape and dimension. In the illustrated embodiment of FIG. 1A, the detector 102 has a circular planar surface. In this embodiment, the operation of the C-arm 106 may be partly or wholly controlled by a user control panel 108.
  • the environment 100 also includes one or more computer systems 110, shown here as being contained within a housing that is integral with the rest of the components.
  • a separate display may be in wired or wireless communication with the computer system(s) 110 to present interfaces, such as those to display fluoroscopic outputs/images, 3D anatomic models, live video, targeting graphics, navigation screens, and UI elements for users to interact with software executing on the computer system(s) 110.
  • an upgraded C-arm platform may be used in conjunction with a monitor cart, as shown in FIG. 1B.
  • the illustrated embodiment of FIG. 1B includes components similar (and similarly identified) to those described above with reference to FIG. 1A, i.e., a surgical C-arm 106, a detector 102, an imaging device 104, a user control panel 108, and computer system(s) 110.
  • the detector 102 is rectangular.
  • the example environment 100 of FIG. 1B also includes a monitor cart 114 having other computer system(s) 111 and a display 112.
  • the display(s) 112 are coupled to computer system(s) 111 to present interfaces.
  • Computer systems 110 and 111 may communicate with each other via one or more wired and/or wireless connections.
  • FIG. 2 depicts yet another example environment 200 having components similar to those described above with reference to FIGs. 1A and 1B.
  • the embodiment of FIG. 2 includes a mobile U-arm fluoroscopy system.
  • the example environment 200 of FIG. 2 includes a U-arm 216, a detector 202, an imaging device 204, a user control panel 208, one or more computer system(s) 210, and a display 212.
  • the embodiment of FIG. 2 is similar to that of FIG. 1A in that certain processing and display capabilities may be provided on the C/U-arm device itself rather than on a separate monitor cart as in FIG. 1B.
  • the U-arm 216 may have two axes for positioning the imaging components: a motorized vertical lift column to raise and lower the U-arm 216, and a passive mechanical pivot to allow the U-arm 216 to be tilted with respect to the ground.
  • the imaging components (notably 202, 204) can be oriented parallel to the ground, vertically with respect to the ground, and anywhere between the two. Motion relative to both axes can be measured and monitored by one or more computer system(s), such as system(s) 210, so that the height and tilt of U-arm 216 can be recorded at the moment every fluoroscopic image is acquired.
  • FIGs. 3A-3C depict an example of manual tomography in a clinical setting in accordance with aspects described herein.
  • an upgraded C-arm platform and monitor cart 314 are provided. More particularly, example environment 300 includes an upgraded C-arm platform with a surgical C-arm 306 with the capabilities for a kinematic examination of a joint in a clinical environment.
  • the C-arm also includes a detector 302 and imaging device 304, for instance an x-ray source.
  • the operation of the C-arm 306 may be partly or wholly controlled by a user control panel 308.
  • operator 330 manually moves the C-arm.
  • example environment 300 may include a fiducial reference 322 used in detecting motion and orientation, and determining viewing geometries as the C-arm is rotated by operator 330.
  • fiducial reference 322 may include a cube that is printed with a grid of holes and embedded with lead BBs. Portions of fiducial reference 322 may be made from radiotransparent plastic, such that only the beads (i.e., BBs) are ‘visible’ in the radiographic image. In examples, some portions of the plastic, for instance, the internal structure, that may be visible in the radiographic image can be used to confirm the principal axis of the radiographic image. There may be some situations in which the number of BBs is undesirably high or too overlapping in the images (for instance depending on the viewing angle to the reference), and thus other designs may be used.
  • fiducial reference 322 may include a cylinder printed with a grid of holes.
  • clear x- and y-axes may be defined to disambiguate the orientation of the cylinder.
  • the cylinder may include an infill that optimizes the radio-opacity of the plastic. While a cylindrical shape is contemplated, any other appropriate configuration, such as checkerboards or AprilTags, may be used, in examples.
  • Example configurations may be understandable to a wide range of algorithms. Moreover, example configurations may be both optically and radiographically visible, and can be used to co-register the x-ray beam to any cameras within the x-ray volume.
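  • As one hedged illustration of how such a radiographically visible fiducial could be used to recover the projection geometry, the following Python sketch solves a perspective-n-point (PnP) problem from detected bead centroids. The bead layout, intrinsics, and placeholder detections are toy values, and OpenCV's solvePnP is just one possible solver; bead detection itself is omitted.

```python
# Sketch: x-ray beam pose relative to a fiducial via a PnP solve.
import numpy as np
import cv2

# Known 3D bead positions in the fiducial's own frame (toy planar grid, meters).
object_points = np.array([[x, y, 0.0] for x in (-0.02, 0.0, 0.02)
                                      for y in (-0.02, 0.0, 0.02)], dtype=np.float32)

# Placeholder 2D centroids consistent with a fronto-parallel view at 2 m.
image_points = (object_points[:, :2] * 500.0
                + np.array([320.0, 240.0], dtype=np.float32))

# Idealized pinhole model of the source/detector geometry (focal in pixels).
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # fiducial-to-beam rotation
    print(np.round(tvec.ravel(), 3))  # ~[0, 0, 2]: fiducial 2 m along the beam axis
```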
  • Fiducial references can be effective in many applications, though, being physical objects, might give rise to additional sterilization procedures or waste (such as sterilized bags) if used in the operating room.
  • Inertial Measurement Unit (IMU) sensor(s) can be included regardless of the composition of the x-ray beam, and they represent just a one-time cost.
  • Example IMU(s) may be used in conjunction with Kalman filter sensor fusion techniques to provide precise measurements to any required devices.
  • two to three 9-axis IMUs may be included for robust sensor acquisition.
  • the IMU(s) may also provide data for kinematic modeling of the x-ray device to provide noise resistance and/or integration of video feedback to provide additional motion feedback. It is also noted that quantification of noise-resistance requirements might inform additional/changed configurations. Motion estimation may become simpler through the reduction of inputs, or potentially more complicated through the introduction of instrumented axes or sensors on multiple parts of the C-arm (as examples) in order to properly analyze vibration. Regardless of the implementation, IMU(s) may provide continuous 9-axis motion feedback of the x-ray tube and detector positions.
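  • To make the Kalman filter sensor fusion mentioned above concrete, here is a minimal Python sketch for a single tilt axis: the filter predicts with the gyro rate and corrects with an accelerometer-derived angle. The noise values and synthetic data are assumptions; a real system would fuse all nine axes.

```python
# Sketch: scalar Kalman filter fusing gyro rate and accelerometer angle.
import numpy as np

def fuse(gyro_rates, accel_angles, dt=0.01, q=1e-4, r=1e-2):
    """Returns filtered tilt angles (radians), one per input sample."""
    angle, P = 0.0, 1.0
    out = []
    for rate, meas in zip(gyro_rates, accel_angles):
        # Predict: integrate the gyro rate; process noise q grows uncertainty.
        angle += rate * dt
        P += q
        # Update: blend in the (noisier) accelerometer-derived angle.
        K = P / (P + r)
        angle += K * (meas - angle)
        P *= (1.0 - K)
        out.append(angle)
    return np.array(out)

rng = np.random.default_rng(1)
t = np.arange(0, 2, 0.01)
true_angle = 0.5 * np.sin(t)                            # simulated C-arm tilt
gyro = np.gradient(true_angle, 0.01) + rng.normal(0, 0.02, t.size)
accel = true_angle + rng.normal(0, 0.1, t.size)
est = fuse(gyro, accel)
print(float(np.abs(est[50:] - true_angle[50:]).max()))  # small residual error
```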
  • data from the IMU(s) may be provided through an application programming interface (API) or directly through DICOMs.
  • a DICOM file may contain headers related to the position of the acquisition.
  • the API and DICOM data together may enable a motion estimation system to execute for 3D reconstruction.
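  • A hedged sketch of pairing each frame with acquisition-pose data follows. The DICOM attributes shown (positioner angles, source distances) are standard XA tags read with pydicom; the `imu_samples` object and its `.nearest()` lookup are hypothetical stand-ins for API-delivered IMU data.

```python
# Sketch: per-frame pose from DICOM headers, optionally merged with IMU data.
import pydicom

def frame_pose(path, imu_samples=None):
    ds = pydicom.dcmread(path)
    pose = {
        "primary_angle_deg": float(ds.get("PositionerPrimaryAngle", 0.0)),
        "secondary_angle_deg": float(ds.get("PositionerSecondaryAngle", 0.0)),
        "source_to_detector_mm": float(ds.get("DistanceSourceToDetector", 0.0)),
        "source_to_patient_mm": float(ds.get("DistanceSourceToPatient", 0.0)),
    }
    if imu_samples is not None:
        # Align the IMU stream to the exposure time -- assumed interface.
        pose["imu"] = imu_samples.nearest(str(ds.get("ContentTime", "")))
    return pose

# Usage (with a real file): frame_pose("frame_0001.dcm")
```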
  • In examples, cone-beam computed tomography (CBCT) reconstruction employs the Feldkamp-Davis-Kress (FDK) algorithm, which is particularly efficient in converting two-dimensional x-ray projections into three-dimensional images.
  • CBCT systems employing the FDK algorithm typically require a minimum of a 180-degree arc (plus a detector half-width), and do so using a specialized, very rigid, and motorized C-arm. These CBCT-capable mobile C-arms are expensive and difficult to manage in operating rooms.
  • Manual tomography acquires 2D X-ray projections by manually rotating the C-arm, or any appropriate mobile/movable imaging device, through one or more arcs.
  • This method reconstructs from the acquired fluoroscopic images by employing a range of algebraic reconstruction techniques (ART). These iterative methods can be utilized in scenarios that require higher image quality or that are limited in available angle projections, and can improve image quality by refining the reconstruction through multiple iterations. The minimum limited-angle projection range needed for ART can vary depending on the specific application and desired image quality. Generally, a range of 90° to 180° is often sufficient to achieve a desired balance between image quality and computational efficiency for an individual planar arc. However, using multiple non-coplanar arcs of less than 90 or 180 degrees can provide good image quality and computational efficiency, and is a novel aspect of the devices and methods disclosed herein.
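  • For illustration only, the following Python sketch shows the core ART (Kaczmarz) iteration: each measured ray is a linear equation over voxel values, and the volume estimate is repeatedly projected onto each equation's solution set. The tiny 2x2 'volume' and hand-built rays stand in for real projection geometry, which this sketch does not model.

```python
# Sketch: algebraic reconstruction (ART / Kaczmarz) on a toy system A @ x = b.
import numpy as np

def art(A, b, iterations=50, relax=0.5):
    """Kaczmarz iterations for A @ x = b; relax in (0, 1] damps updates."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for a_i, b_i in zip(A, b):
            denom = a_i @ a_i
            if denom > 0:
                x += relax * (b_i - a_i @ x) / denom * a_i
    return x

# Toy example: 4 'rays' through a 2x2 'volume' (row, column, diagonal sums).
A = np.array([[1, 1, 0, 0],     # ray through top row
              [0, 0, 1, 1],     # ray through bottom row
              [1, 0, 1, 0],     # ray through left column
              [1, 0, 0, 1.0]])  # diagonal ray
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = art(A, A @ x_true, iterations=500, relax=1.0)
print(np.round(x_rec, 3))   # approaches x_true as iterations increase
```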
  • Most of the installed base of mobile C-arms are at or near their end of life and feature analog imaging components such as image intensifiers and Cathode Ray Tube (CRT) monitors.
  • aspects described herein can replace the analog imaging chain components with a digital flat-panel detector, modern flat-panel display device, high-resolution touch-screen monitors, and a computer system to perform desired processing.
  • the aspects described herein include instrumentation to track the position of the C-arm during image acquisition to facilitate ART image reconstruction.
  • the C-arm, or any appropriate mobile imaging device, can be manually rotated around a specific anatomical region, where images captured along, e.g., two orthogonal 45-degree arcs (Cranial-Caudal and Lateral) can be obtained to facilitate 3D acquisition of small volumes, such as a patient elbow, knee, or other anatomy.
  • FIGs. 4-8 present example conceptual workflows to help illustrate aspects described herein, for instance kinematic evaluation of joints and other aspects.
  • FIG. 4 depicts an example conceptual workflow for manual tomography in accordance with aspects described herein.
  • the concept of ‘manual tomography’ (or ‘arbitrary tomography’) referred to herein is provided and used to create a three-dimensional reconstruction of patient anatomy.
  • the example workflows include a combination of physical events, software steps referring to processing performed by one or more computer system(s) such as those of or in communication with a C-arm or apparatus having imaging device(s), x-ray events, and procedural information.
  • processing of the workflow is performed by a computer system onboard the imaging system/C-arm apparatus and/or remote computer system(s), such as those outside of the clinical or surgical environment or in the cloud, as examples.
  • a patient sits on a table, stands, or is otherwise positioned in any desired examination position, with the patient anatomy being fixed during the imaging and registration (412).
  • An operator or technician manually moves the imaging device (e.g., by way of rotating C-arm, for instance) (410) in trajectories, whether arbitrary or planned, to acquire a bundle of X-ray projections as ‘Live fluoroscopy’ (414).
  • the manual displacement of the imaging device (e.g., by C-arm movement) and the acquisition of X-ray images can be repeated in one or more instances, reflected by the loop between live fluoroscopy 414 and the technician’s manual adjustment 410.
  • the live fluoroscopy (414) can provide fluoroscopic snapshot(s) that provide a last-image hold (LIH) 416, shown on a display.
  • the display could be a display viewed by the operator or another display.
  • the live fluoroscopy facilitates reconstruction of a three-dimensional patient anatomy. Accordingly, the system performs cone beam computed tomography (CBCT) calculations to generate a 3D reconstruction of the patient anatomy (418), and presents the determined 3D model to a 3D model viewer 420.
  • Such viewer could be provided as software that displays the model in an interface of a display. Additionally, this may be presented in conjunction with the LIH (416) to display the 3D model on an interface also with fluoroscopic image(s).
  • the system performs automatic landmarking and reference system embedding of the patient anatomy (422), for example of the patient bone.
  • the automatic landmarking and reference system embedding can be incorporated into the 3D model viewer (424) for viewing with the 3D model(s).
  • An annotated 3D model of the patient anatomy with reference geometries can thereby be generated and provided. It can also be saved into electronic medical records (EMRs) and picture archiving and communication system (PACS) components (426).
  • the system can provide audio, visual, and/or other forms of cues to guide scanning, for instance to commence, resume, halt, etc. scan activities. This can be provided so as to guide the operator in acquiring images at desired approximate angular increments, positions, or the like until a desired arc of motion or any other positional requirement(s) have been reached. Additionally, or alternatively, this can be repeated for additional arcs/positional requirements as desired or necessary.
  • Example guidance can guide a user to add a cranial/caudal arc to a transverse arc, for instance.
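  • Purely as an illustrative sketch of such operator guidance (the 90-degree target, gap heuristic, and cue strings are assumptions), angular coverage of the recorded projection poses could be tracked and used to cue the operator:

```python
# Sketch: cue the operator until a target arc of projections is covered.
import numpy as np

def coverage_deg(recorded_angles_deg):
    """Swept arc of the projections acquired so far."""
    a = np.sort(np.asarray(recorded_angles_deg, dtype=float))
    return float(a[-1] - a[0]) if a.size > 1 else 0.0

def scan_cue(recorded_angles_deg, target_arc_deg=90.0, min_step_deg=5.0):
    """Returns a human-readable cue for the next acquisition."""
    arc = coverage_deg(recorded_angles_deg)
    if arc >= target_arc_deg:
        return "Arc complete - halt, or begin a cranial/caudal arc."
    gaps = np.diff(np.sort(recorded_angles_deg))
    if gaps.size and gaps.max() > 2 * min_step_deg:
        return "Gap detected - sweep back over the skipped region."
    return f"Continue sweep: {arc:.0f} of {target_arc_deg:.0f} degrees covered."

print(scan_cue([0, 7, 14, 22, 30]))  # 'Continue sweep: 30 of 90 degrees covered.'
```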
  • the system can then compute a three-dimensional reconstruction of the imaged anatomic volume (and optionally any other desired, imaged objects), and give orthogonal views (e.g., three) of slices and a three-dimensional rotatable/sliceable view of the three-dimensional anatomy (or other object(s)) based upon point density maps, for instance.
  • the concept of kinematic tomography is employed to project a three-dimensional reconstruction on a sequence of images to determine where a physical object (e.g., bone, patient anatomy, implant, etc.) is in space, and calculate the joint motion of moving patient anatomy.
  • the workflow of FIG. 5 incorporates aspects discussed above with reference to FIG. 4, namely as elements 510, 512, 514, 516, 518, 520, 522 and 524 of FIG. 5.
  • the workflow of FIG. 5 outputs 3D model(s) and reference geometry (526), similar to the workflow of FIG. 4.
  • the workflow of FIG. 5 continues, however, with the patient performing functional motion (e.g., a physical activity) (528) while the joint is dynamically imaged (528).
  • This provides additional live fluoroscopy (530), with the system recording sequential fluoroscopic frames/images (538), providing the live fluoroscopy on the display (532).
  • processing provides model-image registration calculations for relevant bones and/or other object(s) to determine their kinematics (536).
  • the kinematics results can be assembled and integrated (534) with the live fluoroscopy 532 on the display to provide animations and graphs displaying the determined joint kinematics.
  • three-dimensional animations are generated of the motion of the anatomic region of interest.
  • a three-dimensional model is generated; then, if patient anatomy is repositioned, a determination of where the three-dimensional model is in space and how it is oriented can be performed with only one snapshot.
  • the 3D model(s), annotations, reference geometry and kinematics can be saved to PACS or EMRs (540) if desired for recording this data.
  • a five-second horizontal sweeping pulsed fluoroscopic scan can be taken of a weightbearing knee joint of a patient.
  • Patient anatomy and/or implants can then be imaged using pulsed C-arm fluoroscopy as explained herein while the patient stands, kneels, squats, or performs gait motion activities such as climbing stairs or sitting in a chair.
  • Fully autonomous software can use a limited-arc cone beam reconstruction method to create three-dimensional models of the femur and tibia/fibula bones, with or without implants, and then perform model-image registration to quantify the three-dimensional knee kinematics with a practical level of accuracy.
  • the example workflow illustrated in FIG. 5 can be accomplished by an individual radiology technician and appropriate equipment in 5-10 minutes, and does not require additional equipment for gait motion activity beyond a stair or chair (as examples).
  • Image analysis can be performed by a computer system of (e.g., onboard) or in communication with the imaging device (e.g., an upgraded C-arm).
  • the image data is provided in real time or near real time to a cloud system for calculation/analysis, the results of which may be transmitted back to the imaging apparatus and/or used in an interface for user interaction.
  • Weightbearing kinematics affect knee function pre- and post-TKA.
  • Presented herein is an approach that leverages imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics. This enables, for example, dynamic 3D knee kinematics as a component of the routine clinical workup for patients with diseased or replaced knees.
  • FIG. 6 depicts an example conceptual workflow for fluoroscopy-guided navigation for glenoid implant alignment during total shoulder arthroplasty in accordance with aspects described herein. More specifically, FIG. 6 provides fluoroscopy-only navigation for glenoid drilling in a total shoulder arthroplasty procedure.
  • a patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (612).
  • the surgeon places a k-wire at the patient glenoid (614) and commences live fluoroscopy (616) by taking a fluoroscopic exposure of the joint.
  • An LIH image is then presented on a display (618).
  • the workflow then optionally performs aspect 619 for scapular pose estimation, in which an operator manually identifies a plurality of anatomic landmarks on the LIH image (620), the LIH image and identified landmarks are presented on the display (622), and the system generates an initial 3D anatomic pose estimate of the scapula (624) based on the landmarks and leveraging procedural information (608) including pre-operative projection geometry (628) and/or a pre-operative bone model and drill plan (630).
  • the 3D anatomic model is presented on the display (626) with the LIH.
  • the workflow can use the pre-operative projection geometry (628) and/or pre-operative bone model and drill plan (630) to produce a refined 3D anatomic pose using model-image registration (632).
  • a second fluoroscopic exposure is taken (636) and presented on the display with the 3D anatomic model and drill plan (638).
  • the surgeon or operator can then adjust the drill path to match the drill plan (640).
  • the system is equipped to superimpose the direction in which the surgeon should drill a Kirschner wire (k-wire) based on the C-arm from at least two different positions/orientations.
  • the steps can be repeated, wherein the imaging device is repositioned to a new, second position (642), and a fluoroscopic exposure can be taken from the second position.
  • No calibration or attached hardware is required to perform the example workflow according to this embodiment.
  • Another embodiment of the disclosure, conceptually illustrated by FIG. 7, depicts a workflow for fluoroscopy-guided navigation and C-arm displacement sensing for glenoid implant alignment during total shoulder arthroplasty, in accordance with aspects described herein, and more specifically for navigation of glenoid drilling in total shoulder arthroplasty.
  • a patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (714).
  • An optional manual tomography (711) is performed, as described herein, to obtain the three-dimensional geometry of a patient’s anatomy, generating bone model(s) and a surgical plan intraoperatively.
  • an operator or technician manually moves the imaging device (e.g., by way of C-arm movement) in somewhat arbitrary trajectories and acquires a bundle of X-ray projections to permit reconstruction of the three-dimensional anatomy.
  • the manual displacement of the imaging device by an operator and the acquisition of X-ray images can be repeated in one or more instances, taking fluoroscopic exposure(s) (716).
  • a last-image hold (LIH) image is presented on a display (718) for convenience.
  • the system performs CBCT calculations to generate a 3D reconstruction of the patient anatomy (720). These calculations are utilized to generate a 3D anatomic model, which is presented on the display with the LIH image (722).
  • Interactive on-screen planning can provide procedural information (708) such as pre-operative projection geometry (730) and/or pre-operative bone model and drill plan information (728).
  • the system can generate a drill plan.
  • the LIH image, 3D model, and drill plan can be presented on the display (726).
  • Aspect 711 can be optionally used to generate procedural input(s) 708 as shown.
  • the pre-operative projection geometry (730) and pre-operative bone model and drill plan (728) are used to produce a refined 3D anatomical (e.g., scapular) pose using model-image registration and imaging device positioning information (732).
  • the LIH image, refined model and drill plan are presented on the display (734). In this embodiment, a pre-operative scan is not required.
  • a surgeon or operator may make an initial placement of a k-wire (736).
  • a fluoroscopic exposure is taken (738) and the resulting fluoroscopic image is presented on the display with the refined 3D model and drill plan (740).
  • the surgeon can adjust the drill pose, positioning, properties, etc. to match the drill plan (742).
  • the surgeon or operator can rotate the imaging device (i.e., by rotating the C-arm) (744), with the rotation being measured and serving as feedback/input (708) for model-image registration performed at 732.
  • autonomous model/image registration is possible based on the enabled CBCT in the workflow of FIG. 7.
  • Benefits of the embodiment illustrated in FIG. 7 may include, but are not limited to, providing an inexpensive augmentation of a basic fluoroscopy setup, providing a rotate and point-and-shoot procedure with no required setup, no required calibration, and no required attached additional hardware.
  • FIG. 8 illustrates an example application for navigating glenoid drilling in total shoulder arthroplasty.
  • manual tomography is performed using an optical system that is co-registered with an X-ray system, both of which share the same coordinate systems.
  • Surgical registration steps that require preoperative imaging (CT for instance) are not needed.
  • the patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (812)
  • the surgeon or operator places 3D fiducial arrays on the patient anatomy and/or drill (814).
  • a fluoroscopic exposure (816) is taken to obtain a last-image hold, which is presented on a display (818).
  • the surgeon or operator can rotate the imaging device (i.e., by rotating the C-arm) (820) and repeat the fluoroscopic exposure as needed to acquire images and perform CBCT calculations to generate a 3D reconstruction of the patient anatomy, which is co-registered with the fiducial array (822).
  • Interactive on-screen planning can provide procedural information (808) including pre-operative projection and navigation geometry (830) and a pre-operative bone model and drill plan (828).
  • the LIH image, 3D model, and drill plan are presented on the display (832), and the interactive planning enables the surgeon to make an initial placement of a k-wire (834).
  • the placement of the k-wire, and the procedure information 808 including the pre-operative projection and navigation geometry (830) and pre-operative bone model and drill plan (828) provide full 3D navigation of the drill and scapula using the LIDAR/optical array (836).
  • the LIH image, 3D model, drill plan, and navigation feedback can be presented on the display (838). Using the navigation feedback, the surgeon can adjust the drill pose, positioning, properties, etc. to match the drill plan (840).
  • optical markers are used to obtain spatial information of the patient anatomy and utilized when registering the model.
  • Marker arrays can be augmented so that they are easy to identify when conducting manual tomography.
  • marker arrays can be affixed to the target anatomy for manual tomography, and therefore the markers do not have to be registered in space.
  • Manual tomography is performed, X-ray/optical markers are presented on the surgical display, and the system builds a three-dimensional model of a patient’s anatomy. The surgeon can manipulate the joint to flex the knee, which movement/repositioning will be reflected on the display to show the flexed joint.
  • landmarking discriminates between the markers and the anatomy.
  • a LiDAR system can be used in conjunction with an X-ray system. This would mean markers are not needed to obtain spatial information of the patient anatomy; landmarks may be patient anatomy, for example, bone.
  • This three-dimensional surgical navigation approach can generate anatomic model(s) and a surgical plan intraoperatively. Radiation may be used relatively sparingly to develop the bone model and plan. Moreover, an operator can track bone movement in real-time using LiDAR optical tracking/motion capture without additional radiation.
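  • As an illustrative data-model sketch only (the transform names and values are assumptions): once the bone model is registered to a tracked marker array or landmark, optical/LiDAR pose updates can move the model in real time without further radiation, as shown below with 4x4 homogeneous transforms.

```python
# Sketch: radiation-free bone-pose updates from an optical/LiDAR tracker.
import numpy as np

def updated_bone_pose(T_world_marker_now, T_marker_bone):
    """New bone pose in world coordinates from the latest marker pose.

    T_marker_bone is the fixed marker-to-bone offset established once,
    during the radiographic registration step.
    """
    return T_world_marker_now @ T_marker_bone

# At registration time: bone and marker poses both known from imaging.
T_world_bone_0 = np.eye(4);   T_world_bone_0[:3, 3] = [0.10, 0.00, 0.90]
T_world_marker_0 = np.eye(4); T_world_marker_0[:3, 3] = [0.12, 0.05, 0.92]
T_marker_bone = np.linalg.inv(T_world_marker_0) @ T_world_bone_0

# Later: the tracker reports the marker moved; the model follows, x-ray free.
T_world_marker_1 = np.eye(4); T_world_marker_1[:3, 3] = [0.15, 0.05, 0.92]
print(updated_bone_pose(T_world_marker_1, T_marker_bone)[:3, 3])  # [0.13 0. 0.9]
```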
  • FIG. 9 depicts an example representation of relationships between and among aspects described herein for performing imaging tasks based on manual movement of a radiographic imaging system. Arrows between a pair of items in FIG. 9 represent flow, communication or data sharing, and/or other interaction between the items.
  • Example ‘inputs’ 902 include patient anatomy, EMR and PACS information, physician orders, and application (‘app’)/module development by app developers (which could be a provider of the system/processes discussed herein and/or third-party app developers).
  • Software 904 provides an application interface which can include elements/modules from an application store.
  • Imaging hardware 906 includes a clinical imaging system, which could be or include a C-arm or U-arm system as examples, and a surgical imaging system.
  • Various image, measure, and procedure record data 908 includes data of radiographs, fluoroscopic images and 3D CBCT, guided injections, kinematic studies, and optically guided 3D surgical procedures.
  • Software 910, which could be the same as or different from software 904, includes an application interface, which could be the same as or different from the application interface of software 904, and provides outputs 912, for instance EMR and PACS outputs, and usage fees/billing information.
  • Imaging and procedural aspects described herein for use in surgical applications can be provided by hardware, software, or a combination of the two.
  • Example systems and methods can use cone-beam CT and machine learning (ML)-based methods to define a three-dimensional model of patient bone(s) and/or other objects such as implants or surgical instruments from a sweep of fluoroscopic images about the patient anatomy.
  • inputs to a surgical system can include information provided by the patient, physician orders, electronic medical records (EMRs), and picture archiving and communication system (PACS) information.
  • Blur-free images of joints in motion can be taken using X-ray pulses.
  • X-ray pulses can be any appropriate duration, for instance 10-15 milliseconds in duration.
  • the surgical system can be equipped with, e.g., a 15 kilowatt (kW) pulsed generator/tube, and fluoroscopic modes including, but not limited to, High Level Fluoroscopy (HLF) and Spot fluoroscopy.
  • sensors used in a surgical or clinical system may include a detector such as a flat panel detector (FPD).
  • a face of a flat panel detector can have a dimension, for example, of 10” x 12”, 12” x 12”, or 17” x 17”, or any other desired dimension. Larger detectors can capture more natural motions and make observations much easier to capture. Furthermore, providing a relatively large digital detector without pin-cushion distortions improves measurement accuracy.
  • Joint encoders can be used in conjunction with an inertial measurement unit (IMU) as a sensor in an embodiment of the surgical system.
  • Sensors used in the surgical or clinical system can further include a live, bore-sight video camera and docking optical navigation. The live video camera can be used for visual overlay navigation, in examples. Additionally, surgical navigation tracking sensors in 6 degrees-of-freedom for multiple items (C-arm, instruments, patient) can be provided.
  • An example surgical or clinical system is equipped with computational resources, for instance processing circuit(s), such as central processing unit(s) (CPUs) and graphics processing unit(s) (GPUs), memory, and program instructions for execution by the processing circuit(s) to perform aspects described herein.
  • GPUs can, for instance, be provided to offload certain processing tasks from the CPU(s) to the GPU(s), for instance to process specialized applications to perform various surgical tasks, including surgical navigation based on artificial intelligence (AI)/ML- enabled three-dimensional bone registration.
  • the onboard CPU(s)/GPU(s) can complete the desired analysis before uploading to a PACS, or another robust cloud computing connection.
  • the system can be equipped with networking hardware for communication with external components, such as other systems on a local area network or over the internet (for cloud computing access, for example).
  • the system may be equipped to use ML and numerical methods to autonomously perform 3D model-image registration to measure kinematics, in embodiments.
  • the surgical or clinical system includes multiple different functionalities.
  • Functionalities of the system include, but are not limited to, fluoroscopy, video-guided fluoroscopy, manual CBCT, dynamic motion acquisition, three-dimensional registered model/image, image-based optical navigation for surgical guidance, and generation of spot images.
  • Other functionalities of the surgical or clinical hardware system could also include Optical Motion Capture and/or LiDAR.
  • the surgical or clinical system is also capable of generating outputs, including, for instance, fluoroscopic images, still images, spot images, anatomic volumes, annotated three-dimensional models of patient anatomy, procedural records and measurements. Measurements can, for example, confer geometric or kinematic information relating to a patient’s anatomy and/or an implant.
  • the surgical or clinical system for imaging includes a C-arm and a monitor cart.
  • the system is plugged into a power source (i.e., electrical outlet) and/or obtains power from a battery to operate.
  • an optical navigation kit can be installed in a drawer on the monitor cart, to be physically attached to the C-arm, and ready to be used under battery power and Bluetooth communication for procedures requiring this instrumentation.
  • the system can further include medical displays that provide, for instance, overlay of fluoroscopic outputs, live video, targeting graphics, and navigation screens as described herein.
  • individual navigation and image analysis programs can be downloaded and run on the surgical or clinical software platform.
  • Imaging and procedural aspects described herein for use in clinical applications can be provided by hardware, software, or a combination of the two.
  • Example systems and methods can use cone-beam CT and machine learning (ML)-based methods to define a three-dimensional model of patient bone(s) and/or other objects such as implants or surgical instruments from a sweep of fluoroscopic images about the patient anatomy.
  • inputs to the clinical system can comprise information provided by the patient, physician orders, electronic medical records (EMRs), and a picture archiving and communication system (PACS).
  • Blur-free images of joints in motion can be taken using X-ray pulses.
  • X-ray pulses can be any appropriate duration, for instance 5 milliseconds (ms) in duration.
  • the clinical system can be equipped with, e.g., a 4 kilowatt (kW) pulsed generator/tube, and fluoroscopic modes including, but not limited to, High Level Fluoroscopy (HLF) and Spot.
  • FIG. 10 depicts an example process for anatomy reconstruction and related tasks based on manual movement of an imaging device, in accordance with aspects described herein. The process could be performed by one or more computer systems, such as those described herein. A minimal software sketch of this process appears after this list.
  • the process includes sequentially capturing (1002) a plurality of radiographic images of patient anatomy, for instance based on manual movement of an imaging device.
  • the patient anatomy includes bones of a joint of a patient.
  • the manual movement provides a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images.
  • the manual movement is dynamically selected and performed by an operator without predefinition of the radiographic projections and without predefinition of the positioning of the imaging device about the patient anatomy. In this manner, the positions may be partly or wholly arbitrarily determined by the operator.
  • the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images.
  • the process continues by reconstructing (1004) the patient anatomy based on the captured plurality of radiographic images and the recorded positional information.
  • the reconstructing can provide a three-dimensional (3D) model of the patient anatomy.
  • the process additionally registers (1006) the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images, then builds and outputs (1008) an interface that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image.
  • the at least one radiographic image is (i) a radiographic image of the plurality of radiographic images or (ii) a radiographic image obtained subsequent to the sequentially-capturing.
  • the building/outputting depicts the 3D model registered to multiple images (for instance as a progression of images as in an animation, video, or the like).
  • the multiple images could include images of the captured plurality of radiographic images, images obtained/captured subsequent to capturing the plurality of images, or a combination of both.
  • the process could halt based on building and outputting the interface.
  • the process returns to 1002 to capture additional image(s) - optionally just one or a collection, and optionally based on further movement of the imaging device or while the imaging device remains stationary. In this manner, the process can iterate one or more times. Aspects of the reconstructing, registering, and building/outputting when repeated could be performed differently, for instance building off of prior iterations thereof, as appropriate.
  • the process returns to 1002 to sequentially capture additional radiographic images of the patient anatomy as the patient performs physical movement.
  • the additional radiographic images can therefore reflect anatomical movement of the patient anatomy based on the physical movement.
  • the process can register the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images, and update the interface to provide an animated (for instance as a video or other sequence of images) representation of the anatomical movement.
  • the process can determine kinematic properties of the patient anatomy based on registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images, and update the interface to indicate the determined kinematic properties.
  • the building of 1008 includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy. Additionally, in another example in which aspects of FIG. 10 are iterated, the process returns to 1002 to sequentially capture additional radiographic image(s) as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images, and updates the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
  • the process optionally includes performing at least one of (i) annotating the 3D model of the patient anatomy with reference geometry or (ii) landmarking the patient anatomy as reflected in the plurality of radiographic images, and the building at 1008 includes in the interface annotations based on the annotating and/or landmarks based on the landmarking as the case may be.
  • Processes described herein may be performed singly or collectively by one or more computer systems. Such computer systems may be provided as part of a C-arm apparatus or other imaging system described herein, or may be in communication with such a system, as examples.
  • FIG. 11 depicts one example of such a computer system and associated devices to incorporate and/or use aspects described herein.
  • a computer system may also be referred to herein as a data processing device/system, computing device/system/node, or simply a computer.
  • the computer system may be based on one or more of various system architectures and/or instruction set architectures, such as those offered by Intel Corporation (Santa Clara, California, USA) or ARM Holdings plc (Cambridge, England, United Kingdom), as examples.
  • FIG. 11 shows a computer system 1100 in communication with external device(s) 1112.
  • Computer system 1100 includes one or more processor(s) 1102, which are processing circuit(s) such as central processing unit(s) (CPUs), graphics processing unit(s) (GPUs), and/or other types of processors.
  • processors can include functional components used in the execution of instructions, such as functional components to fetch program instructions from locations such as cache or main memory, decode program instructions, execute program instructions, access memory for instruction execution, and write results of the executed instructions.
  • a processor 1102 can also include register(s) to be used by one or more of the functional components.
  • processors of the computer system can be arranged and/or leveraged in any of various ways to facilitate high-performance. For instance, certain processing tasks can be offloaded from the CPU(s) to GPU(s) for processing. Additionally or alternatively, processors, whether CPUs, GPUs, or other types, can be arranged for parallel/concurrent processing in some embodiments. In a specific example, processing tasks are offloaded to a group of GPUs for parallel execution on the GPUs of the group. It is also possible for a group of CPUs (which themselves might have multiple cores each) to execute various tasks in parallel.
  • Computer system 1100 also includes memory 1104, input/output (I/O) devices 1108, and I/O interfaces 1110, which may be coupled to processor(s) 1102 and each other via one or more buses and/or other connections.
  • Bus connections represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures include the Industry Standard Architecture (ISA), the Micro Channel Architecture (MCA), the Enhanced ISA (EISA), the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI).
  • Memory 1104 can be or include main or system memory (e.g. Random Access Memory) used in the execution of program instructions, storage device(s) such as hard drive(s), flash media, or optical media as examples, and/or cache memory, as examples.
  • Memory 1104 can include, for instance, a cache, such as a shared cache, which may be coupled to local caches (examples include L1 cache, L2 cache, etc.) of processor(s) 1102.
  • memory 1104 may be or include at least one computer program product having a set (e.g., at least one) of program modules, instructions, code or the like that is/are configured to carry out functions of embodiments described herein when executed by one or more processors.
  • Memory 1104 can store an operating system 1105 and other computer programs 1106, such as one or more computer programs/applications that execute to perform aspects described herein.
  • programs/applications can include computer readable program instructions that may be configured to carry out functions of embodiments of aspects described herein.
  • I/O devices 1108 include but are not limited to microphones, speakers, Global Positioning System (GPS) devices, cameras, graphics cards or GPUs, imaging devices, detector devices, lights, accelerometers, gyroscopes, magnetometers, sensor devices configured to sense light, proximity, heart rate, body and/or ambient temperature, blood pressure, and/or skin resistance, and activity monitors.
  • An I/O device may be incorporated into the computer system as shown, though in some embodiments an I/O device may be regarded as an external device (1112) coupled to the computer system through one or more I/O interfaces 1110.
  • Computer system 1100 may communicate with one or more external devices 1112 via one or more I/O interfaces 1110.
  • Example external devices include a keyboard, a pointing device, a display, and/or any other devices that enable a user to interact with computer system 1100.
  • Other example external devices include any device that enables computer system 1100 to communicate with one or more other computing systems or peripheral devices such as a printer.
  • a network interface/adapter is an example I/O interface that enables computer system 1100 to communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet), providing communication with other computing devices or systems, storage devices, or the like.
  • Ethernet-based (such as Wi-Fi) interfaces and Bluetooth® adapters are just examples of the currently available types of network adapters used in computer systems (BLUETOOTH is a registered trademark of Bluetooth SIG, Inc., Kirkland, Washington, U.S.A.).
  • the communication between I/O interfaces 1110 and external devices 1112 can occur across wired and/or wireless communications link(s) 1111, such as Ethernet-based wired or wireless connections.
  • Example wireless connections include cellular, Wi-Fi, Bluetooth®, proximity-based, near-field, or other types of wireless connections. More generally, communications link(s) 1111 may be any appropriate wireless and/or wired communication link(s) for communicating data.
  • Particular external device(s) 1112 may include one or more data storage devices, which may store one or more programs, one or more computer readable program instructions, and/or data, etc.
  • Computer system 1100 may include and/or be coupled to and in communication with (e.g. as an external device of the computer system) removable/non-removable, volatile/non- volatile computer system storage media.
  • it may include and/or be coupled to a non-removable, nonvolatile magnetic media (typically called a "hard drive”), a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk”), and/or an optical disk drive for reading from or writing to a removable, non-volatile optical disk, such as a CD-ROM, DVD-ROM or other optical media.
  • Computer system 1100 may be operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Computer system 1100 may take any of various forms, well-known examples of which include, but are not limited to, personal computer (PC) system(s), server computer system(s), such as messaging server(s), thin client(s), thick client(s), workstation(s), laptop(s), handheld device(s), mobile device(s)/computer(s) such as smartphone(s), tablet(s), and wearable device(s), multiprocessor system(s), microprocessor-based system(s), telephony device(s), network appliance(s) (such as edge appliance(s)), virtualization device(s), storage controller(s), set top box(es), programmable consumer electronic(s), network PC(s), minicomputer system(s), mainframe computer system(s), and distributed cloud computing environment(s) that include any of the above systems or devices, and the like.
  • aspects of the present disclosure may be a system, a method, and/or a computer program product, any of which may be configured to perform or facilitate aspects described herein.
  • aspects of the present disclosure may take the form of a computer program product, which may be embodied as computer readable medium(s).
  • a computer readable medium may be a tangible storage device/medium having computer readable program code/instructions stored thereon.
  • Example computer readable medium(s) include, but are not limited to, electronic, magnetic, optical, or semiconductor storage devices or systems, or any combination of the foregoing.
  • Example embodiments of a computer readable medium include a hard drive or other mass-storage device, an electrical connection having wires, random access memory (RAM), read-only memory (ROM), erasable-programmable read-only memory such as EPROM or flash memory, an optical fiber, a portable computer disk/diskette, such as a compact disc read-only memory (CD-ROM) or Digital Versatile Disc (DVD), an optical storage device, a magnetic storage device, or any combination of the foregoing.
  • the computer readable medium may be readable by a processor, processing unit, or the like, to obtain data (e.g. instructions) from the medium for execution.
  • a computer program product is or includes one or more computer readable media that includes/stores computer readable program code to provide and facilitate one or more aspects described herein.
  • program instruction contained or stored in/on a computer readable medium can be obtained and executed by any of various suitable components such as a processor of a computer system to cause the computer system to behave and function in a particular manner.
  • Such program instructions for carrying out operations to perform, achieve, or facilitate aspects described herein may be written in, or compiled from code written in, any desired programming language.
  • such programming language includes object-oriented and/or procedural programming languages such as C, C++, C#, Java, etc.
  • Program code can include one or more program instructions obtained for execution by one or more processors.
  • Computer program instructions may be provided to one or more processors of, e.g., one or more computer systems, to produce a machine, such that the program instructions, when executed by the one or more processors, perform, achieve, or facilitate aspects described herein, such as actions or functions described in flowcharts and/or block diagrams described herein.
  • each block, or combinations of blocks, of the flowchart illustrations and/or block diagrams depicted and described herein can be implemented, in some embodiments, by computer program instructions.
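To make the FIG. 10 flow referenced above concrete, the following is a minimal sketch, in Python, of how the capture (1002), reconstruction (1004), registration (1006), and interface (1008) steps might be organized. Every name here (SimulatedCArm, reconstruct, register_pose, build_interface) is a hypothetical placeholder rather than an actual device API, and the stub bodies stand in for the algorithms described herein.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np

@dataclass
class Projection:
    image: np.ndarray   # one pulsed, blur-free radiographic frame
    pose: np.ndarray    # 4x4 imaging-chain pose recorded at exposure time

@dataclass
class Exam:
    projections: List[Projection] = field(default_factory=list)
    volume: Optional[np.ndarray] = None   # reconstructed anatomy (step 1004)

class SimulatedCArm:
    """Stand-in for the instrumented C-arm; real hardware supplies these calls."""
    def pulse(self) -> np.ndarray:
        return np.zeros((1024, 1024), dtype=np.uint16)   # placeholder frame
    def read_pose(self) -> np.ndarray:
        return np.eye(4)                                  # placeholder encoder/IMU pose

def reconstruct(projections: List[Projection]) -> np.ndarray:
    return np.zeros((128, 128, 128))   # placeholder for ART/CBCT reconstruction

def register_pose(volume: np.ndarray, proj: Projection) -> np.ndarray:
    return np.eye(4)                   # placeholder for model-image registration

def build_interface(exam: Exam, model_poses: List[np.ndarray]) -> None:
    pass                               # placeholder for the registered overlay display

def run_exam(carm: SimulatedCArm, n_pulses: int = 60) -> Exam:
    exam = Exam()
    for _ in range(n_pulses):                             # step 1002: manual sweep
        exam.projections.append(Projection(carm.pulse(), carm.read_pose()))
    exam.volume = reconstruct(exam.projections)           # step 1004
    poses = [register_pose(exam.volume, p) for p in exam.projections]  # step 1006
    build_interface(exam, poses)                          # step 1008
    return exam

exam = run_exam(SimulatedCArm())
```

When the process iterates as described for FIG. 10, the same loop body can be re-entered to capture additional images and update the reconstruction, registration, and interface.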


Abstract

A process includes sequentially capturing radiographic images of patient anatomy based on manual movement of an imaging device. The manual movement provides different radiographic projections that correspond to the capture of the radiographic images, and the sequentially capturing monitors the movement of the imaging device and records positional information of the imaging device. The process also includes reconstructing the patient anatomy based on the captured radiographic images and the recorded positional information, to provide a three-dimensional (3D) model of the patient anatomy. The process registers the 3D model to the patient anatomy as reflected in the radiographic images, and builds and outputs an interface that includes the 3D model registered to the patient anatomy reflected in at least one radiographic image, which is a radiographic image of the radiographic images or is a radiographic image obtained subsequent to the sequentially-capturing.

Description

ANATOMY RECONSTRUCTION AND IMAGING TASKS BASED ON MANUAL MOVEMENT OF IMAGING DEVICES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. provisional application no. 63/562,025, filed on March 6, 2024, entitled "Dynamic Imaging and Method of Kinematic Evaluation of a Joint Using Fluoroscopy" (attorney docket no. 6352.001P1), and to U.S. provisional application no. 63/688,357, filed on August 29, 2024, entitled "System and Method For Reconstructing Three-Dimensional Anatomy Using Manual Movement of Radiographic Imaging Systems and Other Imaging Tasks" (attorney docket no. 6352.001P2), both of which are incorporated herein by reference in their entirety.
BACKGROUND
[0002] Aspects described herein relate generally to anatomic imaging of a joint using fluoroscopy. More particularly, aspects relate to dynamic imaging of a joint in three dimensions and performing various surgical, clinical, and procedural evaluations during the manipulation of the joint while utilizing the fluoroscopy images for guidance and measurement.
[0003] Traditional imaging systems can be used for registering patient anatomy in the clinic or in surgery. In a limited capacity, C-arm fluoroscopy can be used for navigation during various surgical procedures. Conventional techniques for determining the relative position and orientation of a patient’s anatomy rely upon model-image registration, which involves generating a three-dimensional model of the anatomy. Such model generation requires robust computational resources. Current systems lack the ability to efficiently generate a three-dimensional anatomical model intra-procedurally; rather, the three-dimensional anatomic model often comes from pre-operative imaging, such as a pre-operative Computed Tomography (CT) scan. When registration of patient anatomy is performed in conjunction with fluoroscopic imaging, it often requires a dedicated add-on computer cart connected to the fluoroscopic system, or an offline calculation on a separate computer system.
[0004] The ability to obtain three-dimensional anatomic reconstructions using radiographic methods is limited to highly specialized fluoroscopy and CT systems that have been augmented with motors and control systems to execute precise scanning trajectories. These are very expensive, large, and inflexible systems. Moreover, the imaging capability of traditional C-arms is not used when the device is in motion. In some instances, to perform three-dimensional reconstruction of a patient’s anatomy, motorized imaging machines can be programmed to perform circular motion around the patient, acquiring images only at specific angular increments around the patient. A pulsed imaging capability is possible with the C-arm in motion, but these systems are not manually operated. In these instances, images are not captured with a user manually guiding the C-arm.
[0005] Moreover, current fluoroscopic technology is limited to producing static images when used, thereby inhibiting the clinician from evaluating the kinematics of a target joint. Using an example of a total knee arthroplasty (TKA) procedure, single-plane radiographic measurement of three-dimensional TKA kinematics is known to be too time-consuming or cumbersome to be used in a clinical workflow.
SUMMARY
[0006] Shortcomings of the prior art are overcome and additional advantages are provided herein. In one embodiment, a computer-implemented method is provided. The method sequentially captures a plurality of radiographic images of patient anatomy based on manual movement of an imaging device, the manual movement providing a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images, wherein the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images. The method also reconstructs the patient anatomy based on the captured plurality of radiographic images and the recorded positional information, the reconstructing providing a three-dimensional (3D) model of the patient anatomy. The method further registers the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images. The method also builds and outputs an interface that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image, the at least one radiographic image being a radiographic image of the plurality of radiographic images or a radiographic image obtained subsequent to the sequentially-capturing.
[0007] In one or more embodiments, the method also includes sequentially capturing additional radiographic images of the patient anatomy as the patient performs physical movement, where the additional radiographic images reflect anatomical movement of the patient anatomy based on the physical movement, and also includes registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images, and updating the interface to provide an animated representation of the anatomical movement. Additionally, the method can include determining kinematic properties of the patient anatomy based on the registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images, and updating the interface to indicate the determined kinematic properties.
[0008] In one or more embodiments, the building includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy. In some embodiments, the method further includes sequentially capturing additional radiographic images as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images, and updating the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
[0009] In one or more embodiments, the method also includes performing at least one of annotating the 3D model of the patient anatomy with reference geometry or landmarking the patient anatomy as reflected in the plurality of radiographic images, wherein the building includes in the interface annotations based on the annotating or landmarks based on the landmarking.
[0010] In one or more embodiments, the patient anatomy includes bones of a joint of a patient.
[0011] In one or more embodiments, the manual movement is dynamically selected and performed by an operator without predefinition of the radiographic projections and without predefinition of positioning of the imaging device about the patient anatomy.
[0012] Additional aspects of the present disclosure are directed to systems configured to perform the methods described above and herein. For instance, a system can include an imaging device and detector arranged in a fixed position relative to each other, an arm to which the imaging device and detector are attached, where movement of the arm repositions the imaging device and the detector in space and the imaging device and detector remain in the fixed position relative to each other, a display device, a memory, and processing circuit(s) in communication with the memory, and the system can be configured to perform method(s), example aspects of which are recited above and herein.
[0013] The present summary is not intended to illustrate each aspect of, every implementation of, and/or every embodiment of the present disclosure. Additional features and advantages are realized through the concepts described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] Aspects described herein are particularly pointed out and distinctly claimed as examples in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosure are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
[0015] FIGs. 1A-1B depict example imaging environments to incorporate and/or use in accordance with aspects described herein;
[0016] FIG. 2 depicts an example imaging environment to incorporate and/or use in accordance with aspects described herein;
[0017] FIGs. 3A-3C depict an example of a clinical examination in accordance with aspects described herein;
[0018] FIG. 4 depicts an example conceptual workflow for manual tomography in accordance with aspects described herein;
[0019] FIG. 5 depicts an example conceptual workflow for a kinematic evaluation in accordance with aspects described herein;
[0020] FIG. 6 depicts an example conceptual workflow for fluoroscopy-guided navigation for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
[0021] FIG. 7 depicts an example conceptual workflow for fluoroscopy-guided navigation and C-arm displacement sensing for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
[0022] FIG. 8 depicts an example conceptual workflow for fluoroscopy- and LiDAR- or optical-guided navigation and C-arm displacement sensing for glenoid component alignment during total shoulder arthroplasty in accordance with aspects described herein;
[0023] FIG. 9 depicts an example representation of relationships between and among aspects described herein for performing imaging tasks based on manual movement of a radiographic imaging system;
[0024] FIG. 10 depicts an example process for anatomy reconstruction and related tasks based on manual movement of an imaging device, in accordance with aspects described herein; and
[0025] FIG. 11 shows an example computer system to incorporate and/or use aspects described herein.
DETAILED DESCRIPTION
[0026] Computed Tomography (CT) scanners, special-purpose C-arms, and special-purpose O-arms are used for acquiring images of three-dimensional (3D) anatomic regions in the clinic or in surgery. While C-arm fluoroscopy is used, to some extent, for surgical assessment and navigation, current C-arm technology often requires a separate wired connection to a host of expensive add-on technologies in order to augment the minimal capability set natively provided by the C-arm. Consequently, implementing advanced functions like image-guided surgery and model-image registration requires add-on carts that connect to the imaging system and add cost, complexity, capital outlays, additional maintenance demands, and operating room time. Moreover, these expensive single-purpose devices take up space, time, and budget, and can result in greater patient exposure to radiation doses than necessary.
[0027] Moreover, most conventional image-guided procedures require the patient to make a separate clinic visit to a non-surgical CT scanner or other imaging facility for acquisition of three-dimensional anatomic information. This incurs costs for scheduling, scanning, capital equipment, maintenance fees, floor space, and facilities. If the same or similarly-informative information could be obtained with an intraoperative scan, then the extra visit and costly infrastructure are not needed to execute the procedure, and the patient receives a lower radiation dose.
[0028] To determine the three-dimensional position and orientation (i.e., pose) of a patient’s anatomy, current techniques rely on model-image registration, a method that requires a three-dimensional model of the anatomy and a robust computational resource to implement the three-dimensional registration calculations. Typically, the three-dimensional anatomic model comes from a CT (or other 3D imaging, i.e., MRI) scan performed separately from the surgical procedure, and the registration is performed by an add-on computer cart connected to the fluoroscopic system. Thus, if the three-dimensional reconstruction and registration can be performed directly using the fluoroscopic imaging system, then time, money, and X-ray dose can be saved.
[0029] The ability to obtain three-dimensional anatomic reconstructions using radiographic methods is limited, often necessitating CT scanners and highly specialized C-arm or O-arm fluoroscopy systems that have been equipped with motors and control systems to execute precise, pre-planned circular scanning paths. These systems are very expensive, large, and inflexible. Aspects described herein provide more effective and efficient (e.g., less expensive, faster, lower X-ray-dose) solutions to acquire three-dimensional anatomic reconstructions using traditional C-arm fluoroscopy systems (or related architectures), facilitating useful surgical, clinical, and diagnostic applications.
[0030] For example, in surgical applications, a surgeon may wish to use fluoroscopic imaging to measure the placement of an implant, the alignment of a bone segment, or other anatomic three-dimensional spatial relationships. Without the ability to perform three-dimensional registration of the bones or implants, these measurements are negatively affected by the projective geometry of X-ray imaging, which makes two-dimensional measures quite inaccurate. To overcome this limitation, conventional solutions leverage add-on devices that connect to the fluoroscopic system to digitize an analog image, perform calculations, merge images, and/or implement useful display mode(s). All these add-on devices add complexity, expense, time, and the potential for additional errors in the surgical procedure. A fluoroscopy system endowed with the capabilities to determine three-dimensional bone or implant pose, and to perform three-dimensional computations onboard the imaging system without any add-on carts, is desired and would provide a much more efficient tool in terms of time, cost, and avoidance of errors for accurate completion of surgical procedures.
[0031] Traditional C-arm imaging machines are never used to acquire images when in motion, and special-purpose C-arms are motorized imaging machines having a programmed path of circular motion around the patient’s anatomy, acquiring images at specific angular increments around the patient to perform three-dimensional reconstruction of the patient’s anatomy. However, none of these systems allow for images to be taken while a user manually pushes the C-arm. In order to execute this feature, the C-arm is to have an image acquisition capability (which does not exist on most systems) and features enabling continuous capture during manual operation. Some existing systems have pulsed imaging capability while the C-arm is in motion, but these systems are not manually operated. Further, many C-arms do not have instrumentation that allows the user to know where the C-arm is in space while images are acquired. Thus, provided herein, in some embodiments, is a specialized tomography device (for instance, a cone beam CT device) instrumented with a sensor package that accounts for C-arm displacement for arbitrary motions in order to register the position of the C-arm relative to the patient’s anatomy.
[0032] In some embodiments, an integrated and instrumented device is provided to accomplish advanced functions using C-arms in surgical, procedural, and clinical settings. Also provided are methods/processes for execution by computer system(s), for instance those of, or in communication with, a C-arm or imaging device as described herein to perform described aspects.
[0033] An imaging system is provided for reconstructing three-dimensional anatomy optimized for surgery, procedural, and clinical settings. Example imaging systems and methods of the present disclosure endow traditional two-dimensional fluoroscopic imaging systems with the ability to acquire three-dimensional anatomic reconstructions and perform other advanced imaging tasks. These may be provided without the addition of expensive motors, control systems and/or special-purpose add-on elements (such as compute carts) for visualization or other purposes. Aspects presented herein modernize imaging devices (for instance, the C-arm) and create a real-time navigation and image analysis platform for use in, for example, clinical settings, orthopedics, and vascular surgery, as examples.
[0034] While traditional methods rely upon acquiring radiographic projections of the patient at precise angular increments as the X-ray hardware rotates around the patient in a circle or helix, the present disclosure recognizes that three-dimensional anatomic reconstructions can be acquired from an arbitrary distribution of radiographic projections if the spatial pose (i.e., position and orientation) of each projection is known. Further, the present disclosure describes an articulated X-ray positioning device that is instrumented to record the rotation/displacement of each joint, so that the spatial pose for every radiographic projection can be measured.
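To illustrate the pose-recording idea of the preceding paragraph, the spatial pose of each radiographic projection can be computed by composing homogeneous transforms from the instrumented joint readings. A minimal sketch follows; the joint layout (lift, wig-wag, orbital) and the 700 mm arm radius are illustrative assumptions, not a specific device's kinematic chain.

```python
import numpy as np

def rot(axis: str, theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about a principal axis."""
    c, s = np.cos(theta), np.sin(theta)
    i, j = {"x": (1, 2), "y": (2, 0), "z": (0, 1)}[axis]
    T = np.eye(4)
    T[i, i], T[i, j], T[j, i], T[j, j] = c, -s, s, c
    return T

def translate(x: float = 0.0, y: float = 0.0, z: float = 0.0) -> np.ndarray:
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def projection_pose(lift_mm: float, wigwag_rad: float, orbital_rad: float,
                    arm_radius_mm: float = 700.0) -> np.ndarray:
    """Compose instrumented joint readings into the spatial pose of one projection.

    The joint order (lift -> wig-wag -> orbital) and the arm radius are
    illustrative assumptions, not a real device's kinematic chain.
    """
    return (translate(z=lift_mm)            # instrumented vertical lift column
            @ rot("z", wigwag_rad)          # instrumented wig-wag (swivel) joint
            @ rot("x", orbital_rad)         # instrumented orbital rotation of the C
            @ translate(y=arm_radius_mm))   # fixed offset from pivot to x-ray source

# Recorded at the moment a pulsed image is acquired:
pose = projection_pose(lift_mm=250.0, wigwag_rad=0.10, orbital_rad=0.60)
```

Because the pose is composed from measured joint values at each exposure, the projections need not follow a precise circular or helical path.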
[0035] A digital X-ray imaging system is provided that is capable of acquiring images using pulsed/timed image acquisitions so that blur-free images are obtained. This may be performed while the X-ray positioning device is in motion. This allows the system operator to manually move the X-ray system over or around the target anatomy in approximate and/or somewhat arbitrary trajectories to acquire a bundle of X-ray projections from which the three-dimensional anatomy is reconstructed. Some embodiments provide sufficient computational resources onboard the imaging system to permit the rapid reconstruction of the three-dimensional anatomy without transferring the images to a separate device. Example imaging systems and methods also provide a technical specification and software interface that enables third party developers to create imaging applications on and/or for the instrumented imaging platform. Accordingly, pulsed image acquisition with large digital image detectors, instrumented positioner articulations, onboard computational capabilities, and a standardized software interface are proposed to provide a platform for implementing unique radiographic imaging methods, including three-dimensional anatomy reconstruction based on manual movement of the device, three-dimensional skeletal pose and kinematics measurement, surgical procedure guidance, and other methods. This suite of augmentations can extend a standard imaging device (for instance, the C-arm) into an all-in-one, instrumented device capable of performing a multitude of radiographic imaging modalities.
[0036] In clinical settings, it may be desired to have a fluoroscopic system available to produce dynamic images during the manipulation of a target joint. This allows healthcare practitioners to more fully evaluate the functionality of the joint. Conventional fluoroscopic technology is limited to producing static images when used, and therefore does not allow the healthcare practitioner to clinically evaluate the kinematics of the target joint. In the specific example of total knee arthroplasty (TKA), achieving more natural knee kinematics can lead to superior patient outcomes. However, this has not been accomplished for large representative cohorts of TKA patients, at least partially because accurately measuring three-dimensional TKA kinematics is time-consuming and expensive. Traditional techniques, such as single-plane radiographic measurement of 3D TKA kinematics, present distinct challenges particularly in that they are typically too time-consuming or cumbersome to be used in a clinical workflow. If 3D knee kinematics can be measured in a time-efficient, low-cost, and clinically practical manner, it would be possible to provide surgeons and clinicians with numerical parameters and graphical representations of knee movement that will be intuitively understandable and simple to relate to surgical approaches, implant choices, and postoperative evaluations. Thus, aspects provide facilities to assess how the TKA surgical procedure affects clinical outcomes. It is appreciated that aspects described herein with regard to TKA surgical procedure may be equally applicable to the arthroplasty of the shoulder, hip, elbow or other joints.
[0037] Advanced imaging systems and machine-learning-enhanced analysis provided herein make it practical to measure knee kinematics pre-, intra- and post-operatively using radiographic methods. In one embodiment, the present disclosure provides hardware and software systems capable of executing image acquisition and measurement tasks to implement a protocol of TKA kinematic observation, or any type of protocol involving observation of human joints during activities of daily living.
[0038] Furthermore, to overcome the difficulty of obtaining blur-free images for highly dynamic kinematic examinations, a hardware and software solution is provided herein as a stand-alone portable U-arm system (or C-arm) for use, for example, in a clinic, or as an upgrade to common C-arm systems that can be used in the operating room, or stationary U-arm systems that can be used in the clinic. A common software environment operating on these platforms can provide image acquisition and analysis functions to implement pre-operative planning, kinematic examinations, surgical guidance, needle guidance, etc., and the results of one examination can be shared with another compatible platform. For example, a kinematic exam can be used for pre-operative planning in a clinical setting, and the 3D anatomy and surgical plan can be transferred to a second imaging system in the operating room for execution of the surgical plan.
[0039] Systems and methods disclosed herein may be or include various hardware architecture, at least one image chain, and a software environment, as examples. An imaging system according to one embodiment of the present disclosure may include an imaging platform with an image chain. The image chain may include a fluoroscopic class system that includes an x-ray pulse generator system, a solid-state x-ray image receptor and a computer system to coordinate the function of the x-ray generator and the image receptor. The imaging system may also include a computer software package for implementation with the computer system. A method of using the imaging system may include evaluating kinematics of a joint of a patient, pre-, intra-, and/or post-procedurally. TKA may be an example procedure, before, during, or after which a patient’s knee can be kinematically evaluated in accordance with aspects described herein.
[0040] Although the example kinematic examination described herein analyzes the motions of the knee joint as the subject patient anatomy, aspects described herein, including imaging systems and methods of the present disclosure, can be applied to any joint (or more broadly, patient anatomy and/or an implant) for determining its motion. Likewise, although specific procedural and surgical examples are provided, the present disclosure is applicable to any of varying other procedural and surgical examples, even those not explicitly stated herein.
[0041] Current approaches for performing kinematic measurements using model-image registration rely on 3D bone models from 3D medical imaging (i.e., CT or MR exams) or from implant geometry files from the device manufacturer. This presents very significant and practical impediments to performing the kinematic examination in a clinical workflow. Imaging systems of the present disclosure overcome this limitation because they have the capability to perform imaging, such as limited-arc cone-beam computed tomography (CBCT) to create the bone and/or implant models. As described further herein, a kinematic examination may include a relatively brief scan of the patient’s joint in which the system operator sweeps the imaging system over/around the target joint while a collection of fluoroscopic images are recorded. The resulting images are used to reconstruct the 3D bones and/or implants of the joint (and any other objects of interest), and optionally models for model-image registration. For joints with metallic implants, this approach has the specific benefit that a generated model has the geometry of both the bone and the implant, which can result in much better measurement performance than having an implant-only model for kinematic analysis. In cases where the patient already has a suitable 3D bone or implant model available from a previous exam, the system can use those models for measurements and reduce the total radiation exposure required to conduct the examination.
[0042] Output of model-image registration measurement processing may be the 3D positions and orientations of the observed bones, implants and/or other objects during the activities constituting the kinematic examination. This information can be used to generate a range of tabular, statistical, graphic, and animated displays of joint kinematics. Clinicians and surgeons can use these for diagnoses, surgical planning, post-operative assessment of a patient’s joint function, and/or any other desired applications.
[0043] Aspects described herein may perform various tasks, examples of which are examination protocols (such as TKA examination protocols detailed in some examples herein) as well as a wide range of other kinematic examination protocols focused on any of varying joints, ranges of motion, and activities of daily living. Thus, while in some examples the described processes are directed to the examination of TKA kinematics, aspects can more generally be applied to a variety of clinical, procedural, and surgical settings, providing clinicians with the ability to analyze the kinematics of any joint in any pre-, intra-, or post-operative state and an autonomous analysis of the images to provide accurate measurements of joint kinematics. Examples may also be applied to surgical navigation, wherein a surgeon utilizes aspects described herein to locate the pose of the patient anatomy (i.e., bone) in 3D space in order to effect some procedure on that anatomy (i.e., bone).
[0044] In some embodiments, a provided image chain includes a fluoroscopic class system including an x-ray pulse generator system, a solid-state x-ray image receptor, and a computer system to perform activities that include coordinating the function of the x-ray generator and the image receptor. To perform kinematic examination, the x-ray generator system can generate suitably ‘bright’ x-ray pulses of short duration, such as 10 milliseconds (ms) or less, and with the ability to generate a minimum number of pulses per second (for instance, at least 10). The image receptor system can be or include a solid-state x-ray imaging panel. An example imaging panel may be at least 12” x 10” in dimension, and may be capable of acquiring images at a minimum number (for instance 10) of frames per second. In some examples, pixel binning, such as 2 x 1 pixel binning, may be acceptable. The computer system may include a processing circuit, such as one or more processors, for instance one or more central processing units, adequate memory and, optionally, auxiliary processing resources, such as one or more graphic processing units, for compute-intensive image manipulations.
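The example image-chain minimums above can be captured in a small configuration structure, sketched below; the field names are hypothetical and the values simply restate the figures from the preceding paragraph.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageChainSpec:
    """Example image-chain minimums restated from the text; names are illustrative."""
    max_pulse_ms: float = 10.0          # 'bright' pulses of 10 ms or less
    min_pulses_per_sec: int = 10        # at least 10 pulses per second
    min_panel_inches: tuple = (12, 10)  # solid-state panel at least 12" x 10"
    min_fps: int = 10                   # at least 10 frames per second
    binning: tuple = (2, 1)             # 2 x 1 pixel binning may be acceptable

def beam_on_fraction(spec: ImageChainSpec) -> float:
    """Fraction of each second the beam is on at the stated minimums."""
    return spec.min_pulses_per_sec * spec.max_pulse_ms / 1000.0

print(beam_on_fraction(ImageChainSpec()))   # 0.1 -> at most ~10% beam-on time
```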
[0045] Kinematic measurements may be performed using model-image registration. A process can virtually position a 3D model of the anatomy (and/or implant geometry) in an equivalent x-ray projection geometry, and generate an artificial x-ray image based on the model’s virtual position and orientation in space relative to the virtual imaging components. The artificial x-ray projection can then be compared to the acquired x-ray image of the joint. This process can be iteratively repeated until the two projections closely align/match (e.g., within a predetermined tolerance). That match provides the 3D location and orientation of the bone and/or implant in space. The process can be repeated for each image acquired as part of the kinematic examination. In examples, the process can also be used for surgical navigation.
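A compact sketch of this iterative model-image registration loop follows. For brevity it uses a crude parallel-projection ‘artificial x-ray’ and a generic optimizer; a clinical implementation would use the calibrated cone-beam projection geometry described herein, so everything in this sketch is illustrative.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def euler_matrix(rx: float, ry: float, rz: float) -> np.ndarray:
    """3x3 rotation built from Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def artificial_xray(volume: np.ndarray, params: np.ndarray) -> np.ndarray:
    """Generate a crude 'artificial x-ray': rotate/translate the model, sum rays.

    Parallel projection is used for brevity; a clinical implementation would
    use the calibrated cone-beam geometry of the imaging chain.
    """
    rx, ry, rz, tx, ty, tz = params
    R = euler_matrix(rx, ry, rz)
    center = (np.array(volume.shape) - 1) / 2.0
    offset = center - R @ (center + np.array([tx, ty, tz]))
    moved = affine_transform(volume, R, offset=offset, order=1)
    return moved.sum(axis=0)    # integrate attenuation along the virtual beam

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation (higher means a closer match)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register(volume: np.ndarray, xray: np.ndarray) -> np.ndarray:
    """Iteratively adjust pose parameters until the projections align."""
    cost = lambda p: -similarity(artificial_xray(volume, p), xray)
    return minimize(cost, np.zeros(6), method="Nelder-Mead").x
```

In practice this optimization would be repeated for each acquired frame of the examination, yielding the sequence of 3D poses from which kinematics are measured.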
[0046] Thus, information used to perform this process can include (1) the geometry of the x-ray projection, (2) the fluoroscopic image(s), and (3) a 3D model of the bone or implant whose kinematics are being measured. Some aspects described herein relate to (3) - the 3D anatomic model - and more specifically to approaches that overcome significant hurdles encountered in current practice and conventional solutions for model-image registration to measure kinematics and the pose of bones and/or implants in 3D space.
[0047] FIGs. 1A-1B and FIG. 2 depict example imaging environments to incorporate and/or use in accordance with aspects described herein. An upgraded C-arm fluoroscopy system as in FIG. 1A, an upgraded C-arm fluoroscopy system with a monitor cart as in FIG. 1B, and a portable U-arm fluoroscopy system as in FIG. 2 are provided as non-limiting examples. These environments provide a combination of hardware and software to facilitate kinematic examination in similar manners as described.
[0048] The environments of FIGs. 1A-1B and FIG. 2 are just examples; any suitable imaging platform can be included. A stationary U-arm, sometimes described as a floor-mounted fluoroscopic system, or a stationary C-arm, may provide a suitable environment. The imaging, analytical, and computational capabilities described herein can be hosted on a variety of physical positioning systems, and the illustrated examples presented and described herein are not meant to limit the present disclosure or exclude other positioning systems. [0049] Tn the illustrated embodiment of FIG. 1 A, the example environment 100 includes a C-arm platform having a surgical C-arm 106. An example of such platform is the OEC line of devices offered by GE Medical Systems Inc. that may have an upgraded image chain to perform aspects described herein. For instance, the image chain may endow the system with the capabilities required for a kinematic examination of a joint in a clinical or surgical environment. The C-arm includes a detector 102 and an imaging device 104. In one example, the imaging device 104 may be or include an x-ray source. The detector 102 may be any appropriate shape and dimension. In the illustrated embodiment of FIG. 1A, the detector 102 has a circular planar- surface. In this embodiment, the operation of the C-arm 106 may be partly or wholly controlled by a user control panel 108. As the C-arm 106 moves, the motion of each translation and rotation axis may be measured and monitored so that the physical position and orientation of the imaging components (e.g., 102, 104) in space can be recorded at the moment each fluoroscopic image is acquired. The environment 100 also includes one or more computer systems 110, shown here as being contained within a housing that is the integral with the rest of the components. A separate display (not shown) may be in wired or wireless communication with the computer system(s) 110 to present interfaces, such as those to display fluoroscopic outputs/images, 3D anatomic models, live video, targeting graphics, navigation screens, and UI elements for users to interact with software executing on the computer system(s) 110.
[0050] In another embodiment, an upgraded C-arm platform may be used in conjunction with a monitor cart, as shown in FIG. 1B. The illustrated embodiment of FIG. 1B includes components similar (and similarly identified) to those described above with reference to FIG. 1A, i.e., a surgical C-arm 106, a detector 102, an imaging device 104, a user control panel 108, and computer system(s) 110. Here the detector 102 is rectangular. The example environment 100 of FIG. 1B also includes a monitor cart 114 having other computer system(s) 111 and a display 112. The display(s) 112 are coupled to computer system(s) 111 to present interfaces. Computer systems 110 and 111 may communicate with each other via one or more wired and/or wireless connections. Such communication could be direct (e.g., via wire(s) or through direct wireless connection), or indirect, for instance via one or more network(s) such as a local area network (LAN) or wide area network (such as the internet).
[0051] FIG. 2 depicts yet another example environment 200 having components similar to those described above with reference to FIGs. 1A and 1B. The embodiment of FIG. 2 includes a mobile U-arm fluoroscopy system. The example environment 200 of FIG. 2 includes a U-arm 216, a detector 202, an imaging device 204, a user control panel 208, one or more computer system(s) 210, and a display 212. The embodiment of FIG. 2 is similar to that of FIG. 1A in that certain processing and display capabilities may be provided on the C/U-arm device itself rather than on a separate monitor cart as in FIG. 1B.
[0052] Any suitable environment may be capable of operating for the duration of an examination on battery power, facilitating ease of movement from room-to-room in a clinical and/or procedural environment. In the example of FIG. 2, the U-arm 216 may have two axes for positioning the imaging components: a motorized vertical lift column to raise and lower the U-arm 216, and a passive mechanical pivot to allow the U-arm 216 to be tilted with respect to the ground. The imaging components (notably 202, 204) can be oriented parallel to the ground, vertically with respect to the ground, and anywhere between the two. Motion relative to both axes can be measured and monitored by one or more computer system(s), such as system(s) 210, so that the height and tilt of U-arm 216 can be recorded at the moment every fluoroscopic image is acquired.
[0053] FIGs. 3A-3C depict an example of manual tomography in a clinical setting in accordance with aspects described herein. In the illustrated example of FIGs. 3A-3C, an upgraded C-arm platform and monitor cart 314 are provided. More particularly, example environment 300 includes an upgraded C-arm platform with a surgical C-arm 306 with the capabilities for a kinematic examination of a joint in a clinical environment. The C-arm also includes a detector 302 and imaging device 304, for instance an x-ray source. The operation of the C-arm 306 may be partly or wholly controlled by a user control panel 308. In addition, as illustrated across FIGs. 3A-3C, operator 330 manually moves the C-arm. As the C-arm 306 moves, the motion of each translation and rotation axis may be measured and monitored by first computer system 310 of the C-arm and/or second computer system 311 of the monitor cart. The first computer system 310 of the C-arm and/or second computer system 311 of the monitor cart records the physical position and orientation of the imaging components in space at the moment each fluoroscopic image is acquired. In the illustrated example of FIGs. 3A-3C, patient anatomy 320 is clinically examined in accordance with aspects described herein following a TKA.
[0054] As illustrated across FIGs. 3A-3C, example environment 300 may include a fiducial reference 322 used in detecting motion and orientation, and determining viewing geometries as the C-arm is rotated by operator 330. One example design of fiducial reference 322 may include a cube that is printed with a grid of holes and embedded with lead BBs. Portions of fiducial reference 322 may be made from radiotransparent plastic, such that only the beads (i.e., BBs) are ‘visible’ in the radiographic image. In examples, some portions of the plastic (for instance, the internal structure) that may be visible in the radiographic image can be used to confirm the principal axis of the radiographic image. There may be some situations in which the number of BBs is undesirably high or too overlapping in the images (for instance depending on the viewing angle to the reference), and thus other designs may be used.
[0055] For instance, to help avoid a problem associated with BB overlap, another example design of fiducial reference 322 may include a cylinder printed with a grid of holes. A clear x- and y- axis may be defined to disambiguate the orientation of the cylinder. The cylinder may include an infill that optimizes the radio-opacity of the plastic. While a cylindrical shape is contemplated, any other appropriate configuration, such as checkboards or April tags, may be used, in examples. Example configurations may be understandable to a wide range of algorithms. Moreover, example configurations may be both optically and radiographically visible, and can be used to co-register the x-ray beam to any cameras within the x-ray volume.
[0056] Fiducial references can be effective in many applications, though, being physical objects, they might give rise to additional sterilization procedures or waste (such as sterilized bags) if used in the operating room. Thus, in other examples, at least one Inertial Measurement Unit (IMU) may be used for motion estimation of the x-ray generator/manipulator. IMU(s) can be included regardless of the composition of the x-ray beam, and they represent just a one-time cost. Example IMU(s) may be used in conjunction with Kalman filter sensor fusion techniques to provide precise measurements to any required devices. In embodiments, two to three 9-axis IMUs may be included for robust sensor acquisition. The IMU(s) may also provide data for kinematic modeling of the x-ray device to provide noise resistance and/or integration of video feedback to provide additional motion feedback. It is also noted that quantification of requirements of noise resistance might inform additional/changed configurations. Motion estimation may become simpler through the reduction of inputs, or potentially more complicated through the introduction of instrumented axes or sensors on multiple parts of the C-arm (as examples) in order to properly analyze vibration. Regardless of the implementation, IMU(s) may provide continuous 9-axis motion feedback of the x-ray tube and detector positions.
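As one minimal illustration of the Kalman-filter fusion mentioned above, the following tracks a single C-arm angle by fusing a gyro rate (prediction) with an accelerometer-derived inclination (correction). A production system would fuse all nine axes of multiple IMUs in a richer state-space model; the single-axis state and the noise parameters here are arbitrary assumptions.

```python
import numpy as np

def kalman_angle(gyro_rates, accel_angles, dt=0.01, q=1e-4, r=2e-2):
    """Fuse a gyro rate with an accelerometer inclination for one axis.

    State is [angle, gyro_bias]; the gyro drives the prediction and the
    accelerometer-derived angle is the measurement. Noise levels q and r
    are arbitrary illustrative values.
    """
    x = np.zeros(2)                          # [angle, bias]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # we observe the angle only
    Q = q * np.eye(2)
    estimates = []
    for w, z in zip(gyro_rates, accel_angles):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # Update: correct with the accelerometer measurement.
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + r                  # innovation covariance
        K = P @ H.T / S                      # Kalman gain, shape (2, 1)
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Toy usage: a constant 0.5 rad/s sweep observed with noisy sensors.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.01)
gyro = 0.5 + 0.01 * rng.standard_normal(t.size)
accel = 0.5 * t + 0.05 * rng.standard_normal(t.size)
angles = kalman_angle(gyro, accel)
```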
[0057] In some examples, data from the IMU(s) may be provided through an application programming interface (API) or directly through DICOMs. For instance, the API may allow a user to query what the position was at a particular time. The DICOM may contain headers related to the position of the acquisition. In examples, the API and DICOM data together may enable a motion estimation system to support 3D reconstruction.
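To illustrate the kind of position query such an API might support, the sketch below interpolates a logged 6-DOF pose stream to the exposure timestamp recorded in a DICOM header. The interface is hypothetical; linear interpolation of the rotational components is only a small-angle approximation (spherical interpolation would be used for larger rotations).

```python
# Hedged sketch of a pose-at-time query against a logged pose stream.
import numpy as np

def pose_at(timestamps, poses, exposure_time):
    """Interpolate a 6-DOF pose (x, y, z, rx, ry, rz) at exposure_time.

    timestamps: (N,) monotonically increasing sample times (s)
    poses:      (N, 6) logged device poses at those times
    """
    return np.array([
        np.interp(exposure_time, timestamps, poses[:, i])
        for i in range(poses.shape[1])
    ])
```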
[0058] Some cone-beam computed tomography (CBCT) acquisition methods use the Feldkamp-Davis-Kress (FDK) algorithm and subsequent elaborations for image reconstruction. The algorithm may be particularly efficient in converting two-dimensional x-ray projections into three-dimensional images. CBCT systems employing the FDK algorithm typically require a minimum arc of 180 degrees (plus a detector half-width), and do so using a specialized, very rigid, and motorized C arm. These CBCT-capable mobile C arms are expensive and difficult to manage in operating rooms.
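For orientation only, the sketch below shows the two standard pre-processing steps of FDK reconstruction (cosine weighting and ramp filtering) applied to a single projection; weighted backprojection over all projections would follow. Geometry parameters are placeholders, and the ideal (unapodized) ramp is used for brevity.

```python
# Illustrative FDK pre-processing for one projection (cosine weight + ramp filter).
import numpy as np

def fdk_preprocess(projection, du, dv, dsd):
    """projection: (rows, cols) detector image; du, dv: pixel pitch (mm);
    dsd: source-to-detector distance (mm)."""
    rows, cols = projection.shape
    u = (np.arange(cols) - cols / 2 + 0.5) * du
    v = (np.arange(rows) - rows / 2 + 0.5) * dv
    uu, vv = np.meshgrid(u, v)
    weighted = projection * dsd / np.sqrt(dsd**2 + uu**2 + vv**2)  # cosine weighting

    freqs = np.fft.fftfreq(cols, d=du)
    ramp = np.abs(freqs)  # ideal ramp filter in the frequency domain
    return np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))
```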
[0059] Manual Tomography acquires 2D X-ray projections by manually rotating the C arm, or any appropriate mobile/movable imaging device, through one or more arcs. Volumes are then reconstructed from the acquired fluoroscopic images by employing a range of algebraic reconstruction techniques (ART). These methods can be utilized in scenarios that require higher image quality or that are limited in available projection angles. These iterative methods can improve image quality by refining the reconstruction through multiple iterations. Meanwhile, the minimum angular coverage needed for ART can vary depending on the specific application and desired image quality. Generally, a range of 90° to 180° is often sufficient to achieve a desired balance between image quality and computational efficiency for an individual planar arc. However, using multiple non-coplanar arcs of less than 90 or 180 degrees can provide good image quality and computational efficiency, and is a novel aspect of the devices and methods disclosed herein.
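As a minimal, non-authoritative sketch of the ART family referenced above, the following implements one Kaczmarz sweep over the rays, assuming a precomputed system matrix A mapping voxel values to measured line integrals. Practical implementations compute the rows of A on the fly from the recorded C-arm poses and use sparse storage.

```python
# One ART (Kaczmarz) sweep; A: (num_rays, num_voxels), b: (num_rays,),
# x: current volume estimate flattened to (num_voxels,).
import numpy as np

def art_sweep(A, b, x, relax=0.25):
    for i in range(A.shape[0]):
        a_i = A[i]
        norm = a_i @ a_i
        if norm > 0:
            x += relax * (b[i] - a_i @ x) / norm * a_i  # project x onto ray i's hyperplane
    return x
```

Iterating such sweeps refines the reconstruction; limited-angle data typically benefits from more iterations and a smaller relaxation factor.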
[0060] Most of the installed base of mobile C arms is at or near its end of life and features analog imaging components such as image intensifiers and Cathode Ray Tube (CRT) monitors. Aspects described herein can replace the analog imaging chain components with a digital flat-panel detector, modern flat-panel display device, high-resolution touch-screen monitors, and a computer system to perform desired processing. Moreover, the aspects described herein include instrumentation to track the position of the C arm during image acquisition to facilitate ART image reconstruction. Thus, the C arm, or any appropriate mobile imaging device, can be manually rotated around a specific anatomical region, where images captured along, e.g., two orthogonal 45-degree arcs (Cranial-Caudal and Lateral), can be obtained to facilitate 3D acquisition of small volumes, such as a patient elbow, knee, or other anatomy.
[0061] The implications of using manual tomography in the surgical, clinical, and/or procedural settings are significant with regard to workflow improvement and precision.
[0062] FIGs. 4-8 present example conceptual workflows to help illustrate aspects described herein, for instance kinematic evaluation of joints and other aspects. FIG. 4, for instance, depicts an example conceptual workflow for manual tomography in accordance with aspects described herein. The concept of ‘manual tomography’, or ‘arbitrary tomography’, referred to herein is used to create a three-dimensional reconstruction of patient anatomy. The example workflows include a combination of physical events, software steps referring to processing performed by one or more computer system(s) such as those of or in communication with a C-arm or apparatus having imaging device(s), x-ray events, and procedural information. In examples, processing of the workflow is performed by a computer system onboard the imaging system/C-arm apparatus and/or remote computer system(s), such as those outside of the clinical or surgical environment or in the cloud, as examples.
[0063] In the example of FIG. 4, a patient sits on a table, stands, or is otherwise positioned in any desired examination position, with the patient anatomy being fixed during the imaging and registration (412). An operator or technician manually moves the imaging device (e.g., by way of rotating the C-arm, for instance) (410) in trajectories, whether arbitrary or planned, to acquire a bundle of X-ray projections as ‘Live fluoroscopy’ (414). In an embodiment, the manual displacement of the imaging device (e.g., by C-arm movement) by the operator and the acquisition of X-ray images can be repeated in one or more instances, reflected by the loop between live fluoroscopy 414 and the technician’s manual adjustment 410. The live fluoroscopy (414) can provide fluoroscopic snapshot(s) that provide a last-image hold (LIH) 416, shown on a display. The display could be a display viewed by the operator or another display. The live fluoroscopy facilitates reconstruction of a three-dimensional patient anatomy. Accordingly, the system performs cone beam computed tomography (CBCT) calculations to generate a 3D reconstruction of the patient anatomy (418), and presents the determined 3D model to a 3D model viewer 420. Such a viewer could be provided as software that displays the model in an interface of a display. Additionally, this may be presented in conjunction with the LIH (416) to display the 3D model on an interface also with fluoroscopic image(s). In examples, one overlays the other, optionally with transparency applied so as to view the anatomy or other imaged objects together with the models thereof and registered thereto. In some examples, the system performs automatic landmarking and reference system embedding of the patient anatomy (422), for example of the patient bone. The automatic landmarking and reference system embedding can be incorporated into the 3D model viewer (424) for viewing with the 3D model(s). An annotated 3D model of the patient anatomy with reference geometries can thereby be generated and provided. It can also be saved into electronic medical records (EMRs) and picture archiving and communication system (PACS) components (426).
[0064] In an embodiment, the system can provide audio, visual, and/or other forms of cues to guide scanning, for instance to commence, resume, halt, etc., scan activities. This can be provided so as to guide the operator in acquiring images at desired approximate angular increments, positions, or the like until a desired arc of motion or any other positional requirement(s) have been reached. Additionally, or alternatively, this can be repeated for additional arcs/positional requirements as desired or necessary. Example guidance can guide a user to add a cranial/caudal arc to a transverse arc, for instance. In any case, the system can then compute a three-dimensional reconstruction of the imaged anatomic volume (and optionally any other desired, imaged objects), and give orthogonal views (e.g., three) of slices and a three-dimensional rotatable/sliceable view of the three-dimensional anatomy (or other object(s)) based upon point density maps, for instance.
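One possible form of the cueing logic described above is sketched below: the angular coverage of the acquired projections is tracked, and the operator is prompted until the target arc is complete. The thresholds and messages are assumptions for illustration only.

```python
# Hypothetical arc-coverage cue; target_arc_deg and max_gap_deg are placeholders.
import numpy as np

def coverage_cue(acquired_angles_deg, target_arc_deg=45.0, max_gap_deg=1.5):
    angles = np.sort(np.asarray(acquired_angles_deg, dtype=float))
    if angles.size < 2:
        return "continue sweep"
    gaps = np.diff(angles)
    arc = angles[-1] - angles[0]
    if arc < target_arc_deg:
        return f"continue sweep: {arc:.1f} of {target_arc_deg:.0f} degrees covered"
    if gaps.max() > max_gap_deg:
        return f"fill gap near {angles[gaps.argmax()]:.1f} degrees"
    return "arc complete: add an orthogonal (e.g., cranial/caudal) arc or stop"
```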
[0065] In another embodiment, and as is conceptually illustrated by the example workflow of FIG. 5, the concept of kinematic tomography is employed to project a three-dimensional reconstruction on a sequence of images to determine where a physical object (e.g., bone, patient anatomy, implant, etc.) is in space, and calculate the joint motion of moving patient anatomy.

[0066] The workflow of FIG. 5 incorporates aspects discussed above with reference to FIG. 4, namely as elements 510, 512, 514, 516, 518, 520, 522 and 524 of FIG. 5.
[0067] The workflow of FIG. 5 outputs 3D model(s) and reference geometry (526), similar to the workflow of FIG. 4. The workflow of FIG. 5 continues, however, with the patient performing functional motion (e.g., a physical activity) (528) while the joint is dynamically imaged (528). This provides additional live fluoroscopy (530), with the system recording sequential fluoroscopic frames/images (538), providing the live fluoroscopy on the display (532). Meanwhile, processing provides model-image registration calculations for relevant bones and/or other object(s) to determine their kinematics (536). The kinematics results can be assembled and integrated (534) with the live fluoroscopy 532 on the display to provide animations and graphs displaying the determined joint kinematics. For example, using the determined kinematic data, three-dimensional animations are generated of the motion of the anatomic region of interest. Once a three-dimensional model is generated, if the patient anatomy is repositioned, a determination of where the three-dimensional model is in space and how it is oriented can be performed with only one snapshot. The 3D model(s), annotations, reference geometry and kinematics can be saved to PACS or EMRs (540) if desired for recording this data.
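A hedged sketch of the model-image registration step is given below: the bone pose is optimized so that a simulated projection (digitally reconstructed radiograph, DRR) of the 3D model best matches a fluoroscopic frame under normalized cross-correlation. The render_drr callable is assumed to exist and is not specified by the source; the derivative-free optimizer is one reasonable choice among several.

```python
# Illustrative per-frame model-image registration by DRR similarity.
import numpy as np
from scipy.optimize import minimize

def register_frame(model, frame, render_drr, pose0):
    """pose0: initial 6-DOF guess (tx, ty, tz, rx, ry, rz)."""
    def neg_ncc(pose):
        drr = render_drr(model, pose)  # simulated radiograph at this pose
        a = drr - drr.mean()
        b = frame - frame.mean()
        return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    result = minimize(neg_ncc, pose0, method="Powell")  # derivative-free search
    return result.x  # registered pose for this frame
```

Running this per frame over the recorded sequence would yield the pose trajectories from which kinematics are derived.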
[0068] By way of a specific example, a five-second horizontal sweeping pulsed fluoroscopic scan can be taken of a weightbearing knee joint of a patient. Patient anatomy and/or implants can then be imaged using pulsed C-arm fluoroscopy as explained herein while the patient stands, kneels, squats, or performs gait motion activities such as climbing stairs or sitting in a chair. Fully autonomous software can use a limited-arc cone beam reconstruction method to create three-dimensional models of the femur and tibia/fibula bones, with or without implants, and then perform model-image registration to quantify the three-dimensional knee kinematics with a practical level of accuracy.
[0069] The example workflow illustrated in FIG. 5 can be accomplished by an individual radiology technician and appropriate equipment in 5-10 minutes, and does not require additional equipment for gait motion activity beyond a stair or chair (as examples). Image analysis can be performed by a computer system of (e.g., onboard) or in communication with the imaging device (e.g., an upgraded C-arm). In examples, the image data is provided in real-time or near real time to a cloud system for calculation/analysis, the results of which may be transmitted back to the imaging apparatus and/or used in an interface for user interaction.
[0070] Weightbearing kinematics affect knee function pre- and post-TKA. Presented herein is an approach that leverages imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics. This enables, for example, dynamic 3D knee kinematics as a component of the routine clinical workup for patients with diseased or replaced knees.
[0071] FIG. 6 depicts an example conceptual workflow for fluoroscopy-guided navigation for glenoid implant alignment during total shoulder arthroplasty in accordance with aspects described herein. More specifically, FIG. 6 provides fluoroscopy-only navigation for glenoid drilling in a total shoulder arthroplasty procedure. In the example workflow of FIG. 6, a patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (612). The surgeon places a k-wire at the patient glenoid (614) and commences live fluoroscopy (616) by taking a fluoroscopic exposure of the joint. An LIH image is then presented on a display (618). The workflow then optionally performs aspect 619 for scapular pose estimation, in which an operator manually identifies a plurality of anatomic landmarks on the LIH image (620), the LIH image and identified landmarks are presented on the display (622), and the system generates an initial 3D anatomic pose estimate of the scapula (624) based on the landmarks and leveraging procedural information (608) including pre-operative projection geometry (628) and/or a pre-operative bone model and drill plan (630). The 3D anatomic model is presented on the display (626) with the LIH. Additionally, the workflow can use the pre-operative projection geometry (628) and/or pre-operative bone model and drill plan (630) to produce a refined 3D anatomic pose using model-image registration (632). The LIH image, 3D model, and drill plan are presented on the display (634). Once the refined model is generated using model-image registration (632), a second fluoroscopic exposure is taken (636) and presented on the display with the 3D anatomic model and drill plan (638). Using the data presented on the display, the surgeon or operator can then adjust the drill path to match the drill plan (640).
[0072] The system is equipped to superimpose the direction in which the surgeon should drill a Kirschner wire (k-wire) based on C-arm views from at least two different positions/orientations. Thus, the steps can be repeated, wherein the imaging device is repositioned to a new, second position (642), and a fluoroscopic exposure can be taken from the second position. No calibration or attached hardware is required to perform the example workflow according to this embodiment.
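As an editorial aside on why two views suffice, the sketch below triangulates a 3D point from two projections with known geometry using the standard direct linear transform (DLT); triangulating two points along the k-wire yields its 3D direction for comparison against the planned drill axis. The projection matrices are assumed to come from the recorded C-arm positions.

```python
# DLT triangulation of one point from two views; P1, P2 are 3x4 projection
# matrices for the two C-arm positions, x1, x2 the matching pixel coordinates.
import numpy as np

def triangulate(P1, x1, P2, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # 3D point in the shared coordinate frame
```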
[0073] Another embodiment of the disclosure, as is conceptually illustrated by FIG. 7, depicts a workflow for fluoroscopy-guided navigation and C-arm displacement sensing for glenoid implant alignment during total shoulder arthroplasty, in accordance with aspects described herein, and more specifically for navigation of glenoid drilling in total shoulder arthroplasty. A patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (714). An optional manual tomography (711) is performed, as described herein, to obtain the three-dimensional geometry of a patient’s anatomy, generating bone model(s) and a surgical plan intraoperatively. Specifically, at 712 and 716, an operator or technician manually moves the imaging device (e.g., by way of C-arm movement) in somewhat arbitrary trajectories and acquires a bundle of X-ray projections to permit reconstruction of the three-dimensional anatomy. The manual displacement of the imaging device by an operator and the acquisition of X-ray images can be repeated in one or more instances, taking fluoroscopic exposure(s) (716). A last-image hold (LIH) image is presented on a display (718) for convenience. The system performs CBCT calculations to generate a 3D reconstruction of the patient anatomy (720). These calculations are utilized to generate a 3D anatomic model, which is presented on the display with the LIH image (722). Interactive on-screen planning (724) can provide procedural information (708) such as pre-operative projection geometry (730) and/or pre-operative bone model and drill plan information (728). The system can generate a drill plan. The LIH image, 3D model, and drill plan can be presented on the display (726). Aspect 711 can be optionally used to generate procedural input(s) 708 as shown. In any case, the pre-operative projection geometry (730) and pre-operative bone model and drill plan (728) are used to produce a refined 3D anatomical (e.g., scapular) pose using model-image registration and imaging device positioning information (732). The LIH image, refined model and drill plan are presented on the display (734). In this embodiment, a pre-operative scan is not required. Using the refined model and drill plan, a surgeon or operator may make an initial placement of a k-wire (736). A fluoroscopic exposure is taken (738) and the resulting fluoroscopic image is presented on the display with the refined 3D model and drill plan (740). Based on the information presented on the display, the surgeon can adjust the drill pose, positioning, properties, etc. to match the drill plan (742). Additionally, the surgeon or operator can rotate the imaging device (i.e., by rotating the C-arm) (744), with the rotation being measured and serving as feedback/input (708) for model-image registration performed at 732. Thus, autonomous model/image registration is possible based on the enabled CBCT in the workflow of FIG. 7.
[0074] Benefits of the embodiment illustrated in FIG. 7 may include, but are not limited to, providing an inexpensive augmentation of a basic fluoroscopy setup, providing a rotate and point-and-shoot procedure with no required setup, no required calibration, and no required attached additional hardware.
[0075] In yet another embodiment of the present disclosure, and as is conceptually illustrated by FIG. 8, fluoroscopy, C-arm displacement sensing, and LiDAR or optical modalities are used for surgical navigation. By way of non-limiting example, FIG. 8 illustrates an example application for navigating glenoid drilling in total shoulder arthroplasty. In the example workflow of FIG. 8, manual tomography is performed using an optical system that is co-registered with an X-ray system, both of which share the same coordinate systems. Surgical registration steps that require preoperative imaging (CT for instance) are not needed.
[0076] In the illustrated example of FIG. 8, the patient is positioned (on a table for instance), the joint dissection is completed, the humeral head is removed, and the fluoroscopic shot is aligned for an anterior-posterior (AP) view (812). The surgeon or operator places 3D fiducial arrays on the patient anatomy and/or drill (814). A fluoroscopic exposure (816) is taken to obtain a last-image hold, which is presented on a display (818). The surgeon or operator can rotate the imaging device (i.e., by rotating the C-arm) (820) and repeat the fluoroscopic exposure as needed to acquire images and perform CBCT calculations to generate a 3D reconstruction of the patient anatomy, which is co-registered with the fiducial array (822). These calculations, in conjunction with the co-registered fiducial array, are used to generate a 3D anatomic model, which is presented with the LIH image on the display (824). Interactive on-screen planning (826) can provide procedural information (808) including pre-operative projection and navigation geometry (830) and a pre-operative bone model and drill plan (828). The LIH image, 3D model, and drill plan are presented on the display (832), and the interactive planning enables the surgeon to make an initial placement of a k-wire (834). The placement of the k-wire, and the procedure information 808 including the pre-operative projection and navigation geometry (830) and pre-operative bone model and drill plan (828), provide full 3D navigation of the drill and scapula using the LiDAR/optical array (836). The LIH image, 3D model, drill plan, and navigation feedback can be presented on the display (838). Using the navigation feedback, the surgeon can adjust the drill pose, positioning, properties, etc. to match the drill plan (840).
[0077] As illustrated in FIG. 8 and described herein, optical markers are used to obtain spatial information of the patient anatomy and are utilized when registering the model. Marker arrays can be augmented so that they are easy to identify when conducting manual tomography. Using three-dimensional optical guidance technology, marker arrays can be affixed to the target anatomy for manual tomography, and therefore the markers do not have to be registered in space. Manual tomography is performed, X-ray/optical markers are presented on the surgical display, and the system builds a three-dimensional model of a patient’s anatomy. The surgeon can manipulate the joint to flex the knee, which movement/repositioning will be reflected on the display to show the flexed joint. When three-dimensional reconstruction is performed, landmarking discriminates between the markers and the anatomy.
[0078] It is also contemplated that a LiDAR system can be used in conjunction with an X-ray system. This would mean markers are not needed to obtain spatial information of the patient anatomy; landmarks may be patient anatomy, for example, bone.
[0079] This three-dimensional surgical navigation approach can generate anatomic model(s) and a surgical plan intraoperatively. Radiation may be used relatively sparingly to develop the bone model and plan. Moreover, an operator can track bone movement in real-time using LiDAR optical tracking/motion capture without additional radiation.
[0080] FIG. 9 depicts an example representation of relationships between and among aspects described herein for performing imaging tasks based on manual movement of a radiographic imaging system. Arrows between a pair of items in FIG. 9 represent flow, communication or data sharing, and/or other interaction between the items. Example ‘inputs’ 902 include patient anatomy, EMR and PACS information, physician orders, and application (‘app’)/module development by app developers (which could be a provider of the system/processes discussed herein and/or third-party app developers). Software 904 provides an application interface which can include elements/modules from an application store. Imaging hardware 906 includes a clinical imaging system, which could be or include a C-arm or U-arm system as examples, and a surgical imaging system. The systems exchange images, plans, and any other relevant information to support processes and functions performed by the systems. Various image, measure, and procedure record data 908 includes data of radiographs, fluoroscopic images and 3D CBCT, guided injections, kinematic studies, and optically guided 3D surgical procedures. Software 910, which could be the same as or different from software 904, includes an application interface, which could be the same as or different from the application interface of software 904, and which provides outputs 912, for instance EMR and PACS outputs, and usage fees/billing information.
[0081] Imaging and procedural aspects described herein for use in surgical applications can be provided by hardware, software, or a combination of the two. Example systems and methods can use cone-beam CT and machine learning (ML)-based methods to define a three-dimensional model of patient bone(s) and/or other objects such as implants or surgical instruments from a sweep of fluoroscopic images about the patient anatomy. As reflected by FIG. 9, inputs to a surgical system can include information provided by the patient, physician orders, electronic medical records (EMRs), and picture archiving and communication system (PACS) information. Blur-free images of joints in motion can be taken using X-ray pulses. X-ray pulses can be of any appropriate duration, for instance 10-15 milliseconds. The surgical system can be equipped with, e.g., a 15 kilowatt (kW) pulsed generator/tube, and fluoroscopic modes including, but not limited to, High Level Fluoroscopy (HLF) and Spot fluoroscopy.
[0082] In one embodiment, sensors used in a surgical or clinical system may include a detector such as a flat panel detector (FPD). Flat-panel detectors facilitate calibration and do not require distortion compensation. A face of a flat panel detector can have a dimension, for example, of 10"x12", 12"x12", or 17"x17", or any other desired dimension. Larger detectors can capture more natural motions and make observations much easier to obtain. Furthermore, providing a relatively large digital detector without pin-cushion distortions improves measurement accuracy. Joint encoders can be used in conjunction with an inertial measurement unit (IMU) as a sensor in an embodiment of the surgical system. Sensors used in the surgical or clinical system can further include a live, bore-sight video camera and docking optical navigation. The live video camera can be used for visual overlay navigation, in examples. Additionally, surgical navigation tracking sensors in 6 degrees-of-freedom for multiple items (C-arm, instruments, patient) can be provided.
[0083] An example surgical or clinical system is equipped with computational resources, for instance processing circuit(s), such as central processing unit(s) (CPUs) and graphics processing unit(s) (GPUs), memory, and program instructions for execution by the processing circuit(s) to perform aspects described herein. GPUs can, for instance, be provided to offload certain processing tasks from the CPU(s) to the GPU(s), for instance to process specialized applications to perform various surgical tasks, including surgical navigation based on artificial intelligence (AI)/ML-enabled three-dimensional bone registration. The onboard CPU(s)/GPU(s) can complete the desired analysis before uploading to a PACS or another system over a robust cloud computing connection. The system can be equipped with networking hardware for communication with external components, such as other systems on a local area network or over the internet (for cloud computing access, for example). The system may be equipped to use ML and numerical methods to autonomously perform 3D model-image registration to measure kinematics, in embodiments.
[0084] In embodiments, the surgical or clinical system includes multiple different functionalities. Functionalities of the system include, but are not limited to, fluoroscopy, video-guided fluoroscopy, manual CBCT, dynamic motion acquisition, three-dimensional registered model/image, image-based optical navigation for surgical guidance, and generation of spot images. Other functionalities of the surgical or clinical hardware system could also include Optical Motion Capture and/or LiDAR.
[0085] The surgical or clinical system is also capable of generating outputs, including, for instance, fluoroscopic images, still images, spot images, anatomic volumes, annotated three-dimensional models of patient anatomy, and procedural records and measurements. Measurements can, for example, convey geometric or kinematic information relating to a patient’s anatomy and/or an implant.
[0086] In one embodiment, the surgical or clinical system for imaging includes a C-arm and a monitor cart. The system is plugged into a power source (e.g., an electrical outlet) and/or obtains power from a battery to operate. Additionally, an optical navigation kit can be installed in a drawer on the monitor cart, to be physically attached to the C-arm, and ready to be used under battery power and Bluetooth communication for procedures requiring this instrumentation. The system can further include medical displays that provide, for instance, overlay of fluoroscopic outputs, live video, targeting graphics, and navigation screens as described herein.
[0087] In an embodiment, individual navigation and image analysis programs can be downloaded and run on the surgical or clinical software platform.
[0088] Imaging and procedural aspects described herein for use in clinical applications can be provided by hardware, software, or a combination of the two. Example systems and methods can use cone-beam CT and machine learning (ML)-based methods to define a three-dimensional model of patient bone(s) and/or other objects such as implants or surgical instruments from a sweep of fluoroscopic images about the patient anatomy. As shown in FIG. 9, inputs to the clinical system can comprise information provided by the patient, physician orders, electronic medical records (EMRs), and picture archiving and communication system (PACS) information. Blur-free images of joints in motion can be taken using X-ray pulses. X-ray pulses can be of any appropriate duration, for instance 5 milliseconds (ms). The clinical system can be equipped with, e.g., a 4 kilowatt (kW) pulsed generator/tube, and fluoroscopic modes including, but not limited to, High Level Fluoroscopy (HLF) and Spot fluoroscopy.
[0089] Though conventional systems and approaches exist in which fluoroscopic C-arms use motors and control systems to obtain planar/circular image scans for the purpose of reconstructing 3D anatomy, such conventional systems/approaches fail to enable cone-beam acquisitions accomplished with manual control of the imaging platform. Additionally, conventional add-on equipment exists that connects via wires (video cables) to a traditional C-arm and then performs model-image registration or a computerized surgical navigation function. However, such conventional equipment fails to provide surgical navigation and other advanced imaging functions that can be accomplished directly and natively on the image-acquisition system.
[0090] FIG. 10 depicts an example process for anatomy reconstruction and related tasks based on manual movement of an imaging device, in accordance with aspects described herein. The process could be performed by one or more computer systems, such as those described herein.
[0091] Referring to FIG. 10, the process includes sequentially capturing (1002) a plurality of radiographic images of patient anatomy, for instance based on manual movement of an imaging device. In examples, the patient anatomy includes bones of a joint of a patient. The manual movement provides a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images. In examples, the manual movement is dynamically selected and performed by an operator without predefinition of the radiographic projections and without predefinition of the positioning of the imaging device about the patient anatomy. In this manner, the positions may be partly or wholly arbitrarily decided/determined by the operator. In any case, the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images.
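Purely as an illustrative data structure for the monitoring and recording described at 1002 (the field and callback names are assumptions), each captured projection might be stored together with the device pose at the instant of exposure:

```python
# Hypothetical record of a projection tagged with device pose at exposure.
from dataclasses import dataclass
import numpy as np

@dataclass
class TaggedProjection:
    image: np.ndarray        # 2D radiographic frame
    timestamp: float         # exposure time (s)
    position: np.ndarray     # (3,) imaging-device position (mm)
    orientation: np.ndarray  # (3, 3) imaging-device rotation in space

def capture_sequence(acquire_frame, read_pose, n_frames):
    """acquire_frame and read_pose are assumed device callbacks."""
    bundle = []
    for _ in range(n_frames):
        img, t = acquire_frame()
        pos, rot = read_pose(t)  # pose at the exposure instant
        bundle.append(TaggedProjection(img, t, pos, rot))
    return bundle
```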
[0092] The process continues by reconstructing (1004) the patient anatomy based on the captured plurality of radiographic images and the recorded positional information. The reconstructing can provide a three-dimensional (3D) model of the patient anatomy. The process additionally registers (1006) the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images, then builds and outputs (1008) an interface that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image. The at least one radiographic image is (i) a radiographic image of the plurality of radiographic images or (ii) a radiographic image obtained subsequent to the sequentially-capturing. In examples where the building/outputting depicts the 3D model registered to multiple images (for instance as a progression of images as in an animation, video, or the like), the multiple images could include images of the captured plurality of radiographic images, images obtained/captured subsequent to capturing the plurality of images, or a combination of both.
[0093] The process could halt based on building and outputting the interface. In some examples, the process returns to 1002 to capture additional image(s) - optionally just one or a collection, and optionally based on further movement of the imaging device or while the imaging device remains stationary. In this manner, the process can iterate one or more times. Aspects of the reconstructing, registering, and building/outputting when repeated could be performed differently, for instance building off of prior iterations thereof, as appropriate.
[0094] In one example in which aspects of FIG. 10 are iterated, the process returns to 1002 to sequentially capture additional radiographic images of the patient anatomy as the patient performs physical movement. The additional radiographic images can therefore reflect anatomical movement of the patient anatomy based on the physical movement. The process can register the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images, and update the interface to provide an animated (for instance as a video or other sequence of images) representation of the anatomical movement. In yet further embodiments, the process can determine kinematic properties of the patient anatomy based on registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images, and update the interface to indicate the determined kinematic properties.
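For illustration of how a kinematic property could be derived from the registered poses (a simplified convention, not the disclosed method), the sketch below expresses the tibia pose in the femoral frame and extracts an approximate flexion angle from the relative rotation:

```python
# Simplified joint-kinematics extraction from two registered 4x4 poses.
import numpy as np

def relative_pose(T_femur, T_tibia):
    """Tibia pose expressed in the femoral coordinate frame."""
    return np.linalg.inv(T_femur) @ T_tibia

def flexion_angle_deg(T_rel):
    """Approximate flexion as rotation about the femoral medial-lateral (x) axis."""
    R = T_rel[:3, :3]
    return np.degrees(np.arctan2(R[2, 1], R[2, 2]))
```

Applied per frame across the additional radiographic images, this yields a flexion-versus-time curve of the kind the interface could display.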
[0095] In some examples, the building of 1008 includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy. Additionally, in another example in which aspects of FIG. 10 are iterated, the process returns to 1002 to sequentially capture additional radiographic image(s) as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images, and updates the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
[0096] In some embodiments, the process optionally includes performing at least one of (i) annotating the 3D model of the patient anatomy with reference geometry or (ii) landmarking the patient anatomy as reflected in the plurality of radiographic images, and the building at 1008 includes in the interface annotations based on the annotating and/or landmarks based on the landmarking as the case may be.
[0097] Processes described herein may be performed singly or collectively by one or more computer systems. Such computer systems may be provided as part of a C-arm apparatus or other imaging system described herein, or may be in communication with such a system, as examples. FIG. 11 depicts one example of such a computer system and associated devices to incorporate and/or use aspects described herein. A computer system may also be referred to herein as a data processing device/system, computing device/system/node, or simply a computer. The computer system may be based on one or more of various system architectures and/or instruction set architectures, such as those offered by Intel Corporation (Santa Clara, California, USA) or ARM Holdings plc (Cambridge, England, United Kingdom), as examples.
[0098] FIG. 11 shows a computer system 1100 in communication with external device(s) 1112. Computer system 1100 includes one or more processor(s) 1102, which are processing circuit(s) such as central processing unit(s) (CPUs), graphics processing unit(s) (GPUs), and/or other types of processors. A processor can include functional components used in the execution of instructions, such as functional components to fetch program instructions from locations such as cache or main memory, decode program instructions, execute program instructions, access memory for instruction execution, and write results of the executed instructions. A processor 1102 can also include register(s) to be used by one or more of the functional components. The processors of the computer system, whether in the form of CPUs, GPUs, and/or other types of processors, can be arranged and/or leveraged in any of various ways to facilitate high performance. For instance, certain processing tasks can be offloaded from the CPU(s) to GPU(s) for processing. Additionally or alternatively, processors, whether CPUs, GPUs, or other types, can be arranged for parallel/concurrent processing in some embodiments. In a specific example, processing tasks are offloaded to a group of GPUs for parallel execution on the GPUs of the group. It is also possible for a group of CPUs (which themselves might have multiple cores each) to execute various tasks in parallel.
[0099] Computer system 1100 also includes memory 1104, input/output (I/O) devices 1108, and I/O interfaces 1110, which may be coupled to processor(s) 1102 and each other via one or more buses and/or other connections. Bus connections represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA), the Micro Channel Architecture (MCA), the Enhanced ISA (EISA), the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI).
[0100] Memory 1104 can be or include main or system memory (e.g. Random Access Memory) used in the execution of program instructions, storage device(s) such as hard drive(s), flash media, or optical media as examples, and/or cache memory, as examples. Memory 1104 can include, for instance, a cache, such as a shared cache, which may be coupled to local caches (examples include L1 cache, L2 cache, etc.) of processor(s) 1102. Additionally, memory 1104 may be or include at least one computer program product having a set (e.g., at least one) of program modules, instructions, code or the like that is/are configured to carry out functions of embodiments described herein when executed by one or more processors.
[0101] Memory 1104 can store an operating system 1105 and other computer programs 1106, such as one or more computer programs/applications that execute to perform aspects described herein. Specifically, programs/applications can include computer readable program instructions that may be configured to carry out functions of embodiments of aspects described herein.
[0102] Examples of I/O devices 1108 include but are not limited to microphones, speakers, Global Positioning System (GPS) devices, cameras, graphics cards or GPUs, imaging devices, detector devices, lights, accelerometers, gyroscopes, magnetometers, sensor devices configured to sense light, proximity, heart rate, body and/or ambient temperature, blood pressure, and/or skin resistance, and activity monitors. An I/O device may be incorporated into the computer system as shown, though in some embodiments an I/O device may be regarded as an external device (1112) coupled to the computer system through one or more I/O interfaces 1110.
[0103] Computer system 1100 may communicate with one or more external devices 1112 via one or more I/O interfaces 1110. Example external devices include a keyboard, a pointing device, a display, and/or any other devices that enable a user to interact with computer system 1100. Other example external devices include any device that enables computer system 1100 to communicate with one or more other computing systems or peripheral devices such as a printer. A network interface/adapter is an example I/O interface that enables computer system 1100 to communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet), providing communication with other computing devices or systems, storage devices, or the like. Ethernet-based (such as Wi-Fi) interfaces and Bluetooth® adapters are just examples of the currently available types of network adapters used in computer systems (BLUETOOTH is a registered trademark of Bluetooth SIG, Inc., Kirkland, Washington, U.S.A.).

[0104] The communication between I/O interfaces 1110 and external devices 1112 can occur across wired and/or wireless communications link(s) 1111, such as Ethernet-based wired or wireless connections. Example wireless connections include cellular, Wi-Fi, Bluetooth®, proximity-based, near-field, or other types of wireless connections. More generally, communications link(s) 1111 may be any appropriate wireless and/or wired communication link(s) for communicating data.
[0105] Particular external device(s) 1112 may include one or more data storage devices, which may store one or more programs, one or more computer readable program instructions, and/or data, etc. Computer system 1100 may include and/or be coupled to and in communication with (e.g. as an external device of the computer system) removable/non-removable, volatile/non- volatile computer system storage media. For example, it may include and/or be coupled to a non-removable, nonvolatile magnetic media (typically called a "hard drive"), a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and/or an optical disk drive for reading from or writing to a removable, non-volatile optical disk, such as a CD-ROM, DVD-ROM or other optical media.
[0106] Computer system 1100 may be operational with numerous other general purpose or special purpose computing system environments or configurations. Computer system 1100 may take any of various forms, well-known examples of which include, but are not limited to, personal computer (PC) system(s), server computer system(s), such as messaging server(s), thin client(s), thick client(s), workstation(s), laptop(s), handheld device(s), mobile device(s)/computer(s) such as smartphone(s), tablet(s), and wearable device(s), multiprocessor system(s), microprocessor-based system(s), telephony device(s), network appliance(s) (such as edge appliance(s)), virtualization device(s), storage controller(s), set top box(es), programmable consumer electronic(s), network PC(s), minicomputer system(s), mainframe computer system(s), and distributed cloud computing environment(s) that include any of the above systems or devices, and the like.
[0107] Aspects of the present disclosure may be a system, a method, and/or a computer program product, any of which may be configured to perform or facilitate aspects described herein.
[0108] In some embodiments, aspects of the present disclosure may take the form of a computer program product, which may be embodied as computer readable medium(s). A computer readable medium may be a tangible storage device/medium having computer readable program code/instructions stored thereon. Example computer readable medium(s) include, but are not limited to, electronic, magnetic, optical, or semiconductor storage devices or systems, or any combination of the foregoing. Example embodiments of a computer readable medium include a hard drive or other mass-storage device, an electrical connection having wires, random access memory (RAM), read-only memory (ROM), erasable-programmable read-only memory such as EPROM or flash memory, an optical fiber, a portable computer disk/diskette, such as a compact disc read-only memory (CD-ROM) or Digital Versatile Disc (DVD), an optical storage device, a magnetic storage device, or any combination of the foregoing. The computer readable medium may be readable by a processor, processing unit, or the like, to obtain data (e.g. instructions) from the medium for execution. In a particular example, a computer program product is or includes one or more computer readable media that includes/stores computer readable program code to provide and facilitate one or more aspects described herein.
[0109] As noted, program instructions contained or stored in/on a computer readable medium can be obtained and executed by any of various suitable components such as a processor of a computer system to cause the computer system to behave and function in a particular manner. Such program instructions for carrying out operations to perform, achieve, or facilitate aspects described herein may be written in, or compiled from code written in, any desired programming language. In some embodiments, such programming language includes object-oriented and/or procedural programming languages such as C, C++, C#, Java, etc.
[0110] Program code can include one or more program instructions obtained for execution by one or more processors. Computer program instructions may be provided to one or more processors of, e.g., one or more computer systems, to produce a machine, such that the program instructions, when executed by the one or more processors, perform, achieve, or facilitate aspects described herein, such as actions or functions described in flowcharts and/or block diagrams described herein. Thus, each block, or combinations of blocks, of the flowchart illustrations and/or block diagrams depicted and described herein can be implemented, in some embodiments, by computer program instructions.
[0111] Although various embodiments are described above, these are only examples.

[0112] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
[0113] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of one or more embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain various aspects and the practical application, and to enable others of ordinary skill in the art to understand various embodiments with various modifications as are suited to the particular use contemplated.

Claims

CLAIMS

What is claimed is:
1. A computer-implemented method including: sequentially capturing a plurality of radiographic images of patient anatomy based on manual movement of an imaging device, the manual movement providing a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images, wherein the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images; reconstructing the patient anatomy based on the captured plurality of radiographic images and the recorded positional information, the reconstructing providing a three-dimensional (3D) model of the patient anatomy; registering the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images; and building and outputting an interface that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image, the at least one radiographic image being a radiographic image of the plurality of radiographic images or a radiographic image obtained subsequent to the sequentially-capturing.
2. The method of claim 1, further including: sequentially capturing additional radiographic images of the patient anatomy as the patient performs physical movement, wherein the additional radiographic images reflect anatomical movement of the patient anatomy based on the physical movement; registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images; and updating the interface to provide an animated representation of the anatomical movement.
3. The method of claim 2, further including: determining kinematic properties of the patient anatomy based on the registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images; and updating the interface to indicate the determined kinematic properties.
4. The method of claim 1, wherein the building includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy.
5. The method of claim 4, further including: sequentially capturing additional radiographic images as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images; and updating the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
6. The method of claim 1, further including performing at least one of annotating the 3D model of the patient anatomy with reference geometry or landmarking the patient anatomy as reflected in the plurality of radiographic images, wherein the building includes in the interface annotations based on the annotating or landmarks based on the landmarking.
7. The method of claim 1, 2, 3, 4, 5 or 6, wherein the patient anatomy includes bones of a joint of a patient.
8. The method of claim 1, wherein the manual movement is dynamically selected and performed by an operator without predefinition of the radiographic projections and without predefinition of positioning of the imaging device about the patient anatomy.
9. A system including: an imaging device and detector arranged in a fixed position relative to each other; an arm to which the imaging device and detector are attached, wherein movement of the arm repositions the imaging device and the detector in space and the imaging device and detector remain in the fixed position relative to each other; a display device; a memory; and a processing circuit in communication with the memory, wherein the system is configured to perform: sequentially capturing a plurality of radiographic images of patient anatomy based on manual movement of the arm to move the imaging device, the manual movement providing a plurality of different radiographic projections that correspond to the capture of the plurality of radiographic images, wherein the sequentially capturing includes monitoring the movement of the imaging device and recording positional information of the imaging device for each of the plurality of different radiographic projections and corresponding capture of the plurality of radiographic images; reconstructing the patient anatomy based on the captured plurality of radiographic images and the recorded positional information, the reconstructing providing a three-dimensional (3D) model of the patient anatomy; registering the 3D model of the patient anatomy to the patient anatomy as reflected in the plurality of radiographic images; and building and outputting an interface to the display device that includes the 3D model of the patient anatomy registered to the patient anatomy reflected in at least one radiographic image, the at least one radiographic image being a radiographic image of the plurality of radiographic images or a radiographic image obtained subsequent to the sequentially-capturing.
10. The system of claim 9, wherein the system is further configured to perform: sequentially capturing additional radiographic images of the patient anatomy as the patient performs physical movement, wherein the additional radiographic images reflect anatomical movement of the patient anatomy based on the physical movement; registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images; and updating the interface to provide an animated representation of the anatomical movement.
11. The system of claim 10, wherein the system is further configured to perform: determining kinematic properties of the patient anatomy based on the registering the 3D model of patient anatomy to the patient anatomy as reflected in the additional radiographic images and on the anatomical movement as reflected by the additional radiographic images; and updating the interface to indicate the determined kinematic properties.
12. The system of claim 9, wherein the building includes in the interface a plan, registered to the patient anatomy as reflected in the plurality of radiographic images, for surgical tool interaction with the patient anatomy.
13. The system of claim 12, wherein the system is further configured to perform: sequentially capturing additional radiographic images as a user introduces the surgical tool into a field of view of the imaging device such that the surgical tool is reflected in the additional radiographic images; and updating the interface to identify a position of the surgical tool in relation to the plan, to facilitate a desired positioning of the surgical tool for performing the surgical tool interaction with the patient anatomy.
14. The system of claim 9, wherein the system is further configured to perform: performing at least one of annotating the 3D model of the patient anatomy with reference geometry or landmarking the patient anatomy as reflected in the plurality of radiographic images, wherein the building includes in the interface annotations based on the annotating or landmarks based on the landmarking.
15. The system of claim 9, 10, 11, 12, 13, or 14, wherein the patient anatomy includes bones of a joint of a patient.
16. The system of claim 9, wherein the manual movement is dynamically selected and performed by an operator absent predefinition of the radiographic projections and absent predefinition of positioning of the imaging device about the patient anatomy.
PCT/US2025/018516 2024-03-06 2025-03-05 Anatomy reconstruction and imaging tasks based on manual movement of imaging devices Pending WO2025188860A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202463562025P 2024-03-06 2024-03-06
US63/562,025 2024-03-06
US202463688357P 2024-08-29 2024-08-29
US63/688,357 2024-08-29

Publications (2)

Publication Number Publication Date
WO2025188860A1 true WO2025188860A1 (en) 2025-09-12
WO2025188860A8 WO2025188860A8 (en) 2025-10-02

Family

ID=96991481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/018516 Pending WO2025188860A1 (en) 2024-03-06 2025-03-05 Anatomy reconstruction and imaging tasks based on manual movement of imaging devices

Country Status (1)

Country Link
WO (1) WO2025188860A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160191887A1 (en) * 2014-12-30 2016-06-30 Carlos Quiles Casas Image-guided surgery with surface reconstruction and augmented reality visualization
US20180070902A1 (en) * 2016-09-14 2018-03-15 Carestream Health, Inc. Apparatus and method for 4d x-ray imaging
US20220079675A1 (en) * 2018-11-16 2022-03-17 Philipp K. Lang Augmented Reality Guidance for Surgical Procedures with Adjustment of Scale, Convergence and Focal Plane or Focal Point of Virtual Data
US20220108468A1 (en) * 2018-09-10 2022-04-07 The University Of Tokyo Method and system for obtaining joint positions, and method and system for motion capture
US20230355347A1 (en) * 2016-09-09 2023-11-09 Mobius Imaging, Llc Methods And Systems For Display Of Patient Data In Computer-Assisted Surgery
US20230404501A1 (en) * 2021-04-09 2023-12-21 Pulmera, Inc. Medical imaging systems and associated devices and methods


Also Published As

Publication number Publication date
WO2025188860A8 (en) 2025-10-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25768840

Country of ref document: EP

Kind code of ref document: A1