WO2024238179A1 - Multi-field of view preview of a long image - Google Patents
Multi-field of view preview of a long image
- Publication number
- WO2024238179A1 (PCT/US2024/027919)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- localization
- images
- scan process
- patient anatomy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/0037—Performing a preliminary scan, e.g. a prescan for identifying a region of interest
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/463—Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/467—Arrangements for interfacing with the operator or the patient characterised by special input means
- A61B6/469—Arrangements for interfacing with the operator or the patient characterised by special input means for selecting a region of interest [ROI]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/488—Diagnostic techniques involving pre-scan acquisition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5235—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
- A61B6/5241—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT combining overlapping images of the same imaging modality, e.g. by stitching
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5247—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/54—Control of apparatus or devices for radiation diagnosis
- A61B6/545—Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/44—Constructional features of apparatus for radiation diagnosis
- A61B6/4429—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
- A61B6/4435—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure
- A61B6/4441—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure the rigid structure being a C-arm or U-arm
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/44—Constructional features of apparatus for radiation diagnosis
- A61B6/4429—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
- A61B6/4435—Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure
- A61B6/4447—Tiltable gantries
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5258—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
- A61B6/5264—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise due to motion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5294—Devices using data or image processing specially adapted for radiation diagnosis involving using additional data, e.g. patient information, image labeling, acquisition parameters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/54—Control of apparatus or devices for radiation diagnosis
- A61B6/547—Control of apparatus or devices for radiation diagnosis involving tracking of position of the device or parts of the device
Definitions
- the present disclosure is generally directed to imaging, and relates more particularly to surgical imaging.
- Surgical robots may assist a surgeon or other medical provider in carrying out a surgical procedure, or may complete one or more surgical procedures autonomously. Imaging may be used by a medical provider for diagnostic and/or therapeutic purposes. Patient anatomy can change over time, particularly following placement of a medical implant in the patient anatomy.
- Example aspects of the present disclosure include:
- a system including: a processor; and a memory storing instructions thereon that, when executed by the processor, cause the processor to: perform a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and perform a long scan process associated with scanning the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the first localization image includes at least a first portion of the patient anatomy
- the second localization image includes at least a second portion of the patient anatomy
- the one or more intermediate localization images include at least a third portion of the patient anatomy.
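- For illustration only (not the claimed implementation), the following is a minimal Python sketch of the preview scan flow described above: localization images are captured at a start position, one or more intermediate positions, and a stop position. The `capture` callback and the millimeter positions are hypothetical stand-ins for a real imaging-device interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LocalizationImage:
    position_mm: float   # position along the scan axis at capture time
    pixels: object       # placeholder for the captured image data

@dataclass
class PreviewScan:
    start_mm: float
    stop_mm: float
    intermediate_mm: List[float] = field(default_factory=list)

    def run(self, capture: Callable[[float], object]) -> List[LocalizationImage]:
        """Capture localization (scout) images at the start position, each
        intermediate position, and the stop position, in scan order."""
        positions = [self.start_mm, *sorted(self.intermediate_mm), self.stop_mm]
        return [LocalizationImage(p, capture(p)) for p in positions]

# Hypothetical usage: a real capture() would command the imaging device.
scan = PreviewScan(start_mm=0.0, stop_mm=600.0, intermediate_mm=[300.0])
scouts = scan.run(capture=lambda p: f"scout@{p}mm")
```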
- identifying the target coordinates is in response to a user selection of at least one of: the target region included in the at least one localization image; and a target portion of the patient anatomy, wherein the target portion is included in the target region.
- identifying the target coordinates is in response to detecting a target feature included in the target region, wherein the target feature is associated with the patient anatomy.
- the target region included in the at least one localization image includes at least a portion of the patient anatomy.
- the instructions executable by the processor to perform the long scan process are further executable by the processor to: capture a set of multidimensional images including the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images; and generate a long scan image including the patient anatomy based on merging data associated with the set of multidimensional images.
- any of the aspects herein, wherein the set of multidimensional images includes an X-ray image, a computed tomography (CT) image, or a magnetic resonance imaging (MRI) image.
- the patient anatomy includes soft tissue.
- the first localization image, the second localization image, and the one or more intermediate localization images each include an X-ray image, a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, an optical image, or a light detection and ranging (LiDAR) image.
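- As a minimal sketch of the merging step recited above (capturing a set of images along the scan and merging their data into one long scan image), assuming the captured frames are equally sized 2D arrays that overlap by a fixed number of rows along the scan axis; a real system would register the frames before blending rather than assume perfect alignment.

```python
import numpy as np

def stitch_long_scan(frames, overlap_px):
    """Merge equally sized 2D frames captured along the scan axis into one
    long image by linearly blending each overlapping band of rows."""
    result = frames[0].astype(float)
    w = np.linspace(0.0, 1.0, overlap_px)[:, None]        # per-row blend weights
    for frame in frames[1:]:
        frame = frame.astype(float)
        band = (1 - w) * result[-overlap_px:] + w * frame[:overlap_px]
        result = np.vstack([result[:-overlap_px], band, frame[overlap_px:]])
    return result

# Hypothetical usage: three 100x64 frames with a 20-row overlap -> (260, 64).
frames = [np.random.rand(100, 64) for _ in range(3)]
long_image = stitch_long_scan(frames, overlap_px=20)
```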
- an imaging device; a processor coupled with the imaging device; and memory coupled with the processor and storing instructions thereon that, when executed by the processor, enable the processor to: perform a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and perform a long scan process associated with scanning the patient anatomy using the imaging device, wherein performing the long scan process is based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the imaging device includes at least one of an O-arm and a C-arm.
- a method including: performing a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and performing a long scan process associated with scanning the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- any of the aspects herein further including: generating a multiple field of view representation of the patient anatomy in response to performing the preview scan process, wherein generating the multiple field of view representation includes providing synchronized navigation of the first localization image, the second localization image, and the one or more intermediate localization images.
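- As an illustrative sketch (not the disclosed implementation) of synchronized navigation: one cursor position along the patient axis can be mapped to a pixel row in every scout image whose field of view contains it, so all views scroll together. The field-of-view extent and image height are assumed parameters.

```python
def synchronized_cursor(world_mm, scout_centers_mm, fov_mm, height_px):
    """Map one cursor position along the scan axis to a pixel row in each
    scout image (keyed by its center position) whose field of view covers it."""
    rows = {}
    for center in scout_centers_mm:
        top = center - fov_mm / 2.0
        if top <= world_mm <= top + fov_mm:
            rows[center] = int((world_mm - top) / fov_mm * (height_px - 1))
    return rows

# Hypothetical usage: a cursor at 320 mm falls inside the middle scout only.
print(synchronized_cursor(320.0, [0.0, 300.0, 600.0], fov_mm=350.0, height_px=512))
```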
- FIGs. 1A through 1C illustrate examples of a system that support aspects of the present disclosure.
- FIGs. 2A through 2F illustrate examples of localization images and long scan images supported by aspects of the present disclosure.
- FIG. 3 illustrates an example of a process flow in accordance with aspects of the present disclosure.
- FIG. 4 illustrates an example of a process flow in accordance with aspects of the present disclosure.
- the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- processors such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry
- proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
- Some imaging systems may support a long scan process capable of generating a multidimensional “long scan” (also referred to herein as a “long film,” “long film image,” or “pseudo-panoramic image”) of a patient anatomy using an imaging device.
- a long scan may produce a long film based on multiple images captured by an imaging device, which may provide a relatively longer or wider image compared to an individual image captured by the imaging device.
- a user may set the length associated with the scan by locating and saving a start position and a stop position.
- the long scan may fail to image a deformity (e.g., curvature) of the spinal anatomy due to the deformity being absent from regions associated with the start and stop positions.
- a consequence associated with missing a section of the spine in the image may include a retaking of the long scan image of the spine, which would result in increased radiation exposure (e.g., radiation dose) to the patient due to the rescan and increased surgery time.
- Some imaging systems may support capturing images of a patient anatomy using an imaging device.
- the workflow includes capturing 2D “scout images” (also referred to herein as “localization images”) prior to a subsequent process for capturing a 2D long film or 3D long scan.
- the O-Arm® imaging system provided by Medtronic Navigation, Inc. supports providing a field of view (FOV) preview feature (also referred to herein as a FOV preview representation and a multiple FOV representation).
- a user may view acquired 2D scout images, prior to the imaging system acquiring a 2D long film or 3D long scan based on the 2D scout images.
- the imaging system may display to users how movement of the imaging system correlates to the patient anatomy captured in the 2D scout images.
- the systems and techniques described herein support capturing “scout images” (also referred to herein as “localization images”) prior to a subsequent scan.
- the scout images may provide a user with a survey of a region of interest.
- the scout images may provide anatomical information based on which the system or a user may localize a target patient anatomy (e.g., target spine structures).
- aspects of the present disclosure support using the localization provided by the scout images in conjunction with multiple imaging types (e.g., X-ray imaging, computed tomography (CT) imaging, magnetic resonance imaging (MRI), optical imaging, light detection and ranging (LiDAR) imaging, camera images from an O-arm, preoperative images, intraoperative images, etc.).
- the techniques described herein include capturing an additional scout image at an intermediate position between the start position and stop position. Additionally, or alternatively, the techniques described herein may include capturing multiple additional scout images at respective intermediate positions between the start position and stop position.
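- One plausible strategy (an assumption for illustration, not the patented method) for choosing the intermediate scout positions: space them evenly between the start and stop positions so that consecutive fields of view keep a minimum overlap.

```python
import math

def intermediate_positions(start_mm, stop_mm, fov_mm, min_overlap_mm=20.0):
    """Return evenly spaced intermediate positions such that consecutive
    fields of view overlap by at least min_overlap_mm."""
    step = fov_mm - min_overlap_mm                    # max advance per image
    n = max(0, math.ceil((stop_mm - start_mm) / step) - 1)
    spacing = (stop_mm - start_mm) / (n + 1)
    return [start_mm + spacing * (i + 1) for i in range(n)]

# Hypothetical usage: a 600 mm scan with a 350 mm field of view needs one
# intermediate scout, placed at the midpoint.
print(intermediate_positions(0.0, 600.0, fov_mm=350.0))   # [300.0]
```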
- the systems and techniques described herein may provide a preview feature (also referred to herein as a “multi-field of view preview” feature or “field of view preview” feature) that supports user-based adjustments to an imaging device (e.g., position adjustments, orientation adjustments, etc. to an O-arm imaging device) in association with capturing an anatomy of interest.
- the systems and techniques described herein may implement the preview feature in response to a user input.
- the systems and techniques may support user-based adjustments for ensuring that the entirety of the anatomy of interest will be captured in the final long scan image.
- the preview feature may support adjusting acquired scout images and displaying how adjustments of the imaging device (e.g., O-arm imaging device, etc.) will translate to a new area of the image being captured.
- the preview feature may support adjustments to acquired scout images (e.g., recentering of the scout images, etc.) based on user selected coordinates in one or more of the acquired scout images, and based on the adjustments, the system may provide a preview of which portions of the anatomy will be captured in a long scan image.
- the system may calculate an image acquisition path based on the adjustments and update locations associated with the start position, intermediate position(s), and stop position.
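- A simplified sketch of the path update described above; a real calculation would account for gantry kinematics and field-of-view geometry rather than simply re-sorting the positions.

```python
def update_acquisition_path(positions_mm, image_index, delta_mm):
    """Recenter one scout position by delta_mm, then return the start,
    intermediate, and stop positions in scan order."""
    updated = list(positions_mm)
    updated[image_index] += delta_mm     # user recentered one scout image
    return sorted(updated)

# Hypothetical usage: shifting the intermediate scout 40 mm toward the stop.
print(update_acquisition_path([0.0, 300.0, 600.0], image_index=1, delta_mm=40.0))
# [0.0, 340.0, 600.0]
```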
- some long scan workflows may acquire scout images at start and stop positions associated with a long scan, which may fail to account for capturing all the curvature associated with the severely deformed spine.
- the techniques described herein of adding (and in some cases, optionally adding) a scout image between the start and stop positions (e.g., in the middle of the scan) in association with the generation of a long scan image will allow users to ensure that the entirety of the anatomy is captured in the long scan image.
- aspects of the present disclosure may support imaging in association with spine surgery procedures, for example, for deformity cases (e.g., adult deformity, adolescent idiopathic scoliosis, etc.).
- the imaging techniques described herein may support the generation or capture of extended 2D images of the spine (e.g., for visualizing spine alignment) and extended 3D images of the spine (e.g., for navigated spine procedures).
- extended 2D images and extended 3D images may include images captured based on 2D long film and 3D long scan workflows using an imaging device (e.g., an O-arm, etc.). Examples of extended 2D images and extended 3D images include long scan images described herein.
- a long scan image described herein may be acquired based on multiple localization images, and a size of the long scan image may be greater than a corresponding size of each localization image at least with respect to a dimension.
- the dimension may correspond to a scan direction associated with acquiring the long scan image.
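- As an illustrative relationship (not stated in the disclosure): if $n$ localization-image fields of view, each of extent $f$ along the scan direction, are merged with pairwise overlap $o < f$, the long scan image spans

$$L = n f - (n - 1)\,o,$$

which exceeds $f$ for any $n \geq 2$, consistent with the long scan image being larger than each localization image along the scan direction.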
- Implementations of the present disclosure provide technical solutions to one or more of the problems of radiation exposure to operators, surgeons, and patients.
- X-ray exposure can be quantified by dose, or the amount of energy deposited by radiation in tissue. Ionizing radiation can cause debilitating medical conditions.
- the long scan imaging techniques described herein reduce the risk of generating a long scan image which fails to capture an entire anatomy of interest, reducing risk of additional radiation exposure due to generating an additional long scan image.
- FIGs. 1A through 1C illustrate examples of a system 100 that support aspects of the present disclosure.
- the system 100 includes a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, a database 130, and/or a cloud network 134 (or other network).
- Systems according to other implementations of the present disclosure may include more or fewer components than the system 100.
- the system 100 may omit and/or include additional instances of one or more components of the computing device 102, the imaging device(s) 112, the robot 114, navigation system 118, the database 130, and/or the cloud network 134.
- system 100 may omit any instance of the computing device 102, the imaging device(s) 112, the robot 114, navigation system 118, the database 130, and/or the cloud network 134.
- the system 100 may support the implementation of one or more other aspects of one or more of the methods disclosed herein.
- the computing device 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110.
- Computing devices according to other implementations of the present disclosure may include more or fewer components than the computing device 102.
- the computing device 102 may be, for example, a control device including electronic circuitry associated with controlling any of the imaging device 112, the robot 114, and the navigation system 118.
- the processor 104 of the computing device 102 may be any processor described herein or any similar processor.
- the processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging devices 112, the robot 114, the navigation system 118, the database 130, and/or the cloud network 134.
- the memory 106 may be or include RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions.
- the memory 106 may store information or data associated with completing, for example, any step of the process flows 300 and 400 described herein, or of any other methods.
- the memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the imaging devices 112, the robot 114, and the navigation system 118.
- the memory 106 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 104, enable image processing 120, segmentation 122, transformation 124, registration 128, and/or object detection 129.
- Such content, if provided as an instruction, may, in some implementations, be organized into one or more applications, modules, packages, layers, or engines.
- the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out the various method and features described herein.
- the data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging devices 112, the robot 114, the navigation system 118, the database 130, and/or the cloud network 134.
- the computing device 102 may also include a communication interface 108.
- the communication interface 108 may be used for receiving data or other information from an external source (e.g., the imaging devices 112, the robot 114, the navigation system 118, the database 130, the cloud network 134, and/or any other system or component separate from the system 100), and/or for transmitting instructions, data (e.g., image data, etc.), or other information to an external system or device (e.g., another computing device 102, the imaging devices 112, the robot 114, the navigation system 118, the database 130, the cloud network 134, and/or any other system or component not part of the system 100).
- the communication interface 108 may include one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth).
- the communication interface 108 may support communication between the device 102 and one or more other processors 104 or computing devices 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
- the computing device 102 may also include one or more user interfaces 110.
- the user interface 110 may be or include a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
- the user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100.
- the user interface 110 may support user modification (e.g., by a surgeon, medical personnel, etc.) of instructions to be executed by the processor 104 according to one or more implementations of the present disclosure, and/or to user modification or adjustment of a setting of other information displayed on the user interface 110 or corresponding thereto.
- the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102.
- the user interface 110 may be located proximate one or more other components of the computing device 102, while in other implementations, the user interface 110 may be located remotely from one or more other components of the computing device 102.
- the imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.).
- image data refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form.
- the image data may include data corresponding to an anatomical feature of a patient, or to a portion thereof.
- the image data may be or include a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure.
- a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
- the imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data.
- the imaging device 112 may be or include, for example, an ultrasound scanner (which may include, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may include, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient 148.
- the imaging device 112 may be contained entirely within a single housing, or may include a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
- in some implementations, the imaging device 112 may include more than one imaging device 112. For example, a first imaging device may provide first image data and/or a first image, and a second imaging device may provide second image data and/or a second image. In still other implementations, the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein. The imaging device 112 may be operable to generate a stream of image data.
- the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
- image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
- the imaging device 112 may include a source 138, a detector 140, and a collimator 144, example aspects of which are later described with reference to Figs. 1B and 1C.
- the robot 114 may be any surgical robot or surgical robotic system.
- the robot 114 may be or include, for example, the Mazor X™ Stealth Edition robotic guidance system.
- the robot 114 may be configured to position the imaging device 112 at one or more precise position(s) and orientation(s), and/or to return the imaging device 112 to the same position(s) and orientation(s) at a later point in time.
- the robot 114 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 118 or not) to accomplish or to assist with a surgical task.
- the robot 114 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure.
- the robot 114 may include one or more robotic arms 116.
- the robotic arm 116 may include a first robotic arm and a second robotic arm, though the robot 114 may include more than two robotic arms.
- one or more of the robotic arms 116 may be used to hold and/or maneuver the imaging device 112.
- the imaging device 112 includes two or more physically separate components (e.g., a transmitter and receiver)
- one robotic arm 116 may hold one such component
- another robotic arm 116 may hold another such component.
- Each robotic arm 116 may be positionable independently of the other robotic arm.
- the robotic arms 116 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
- the robot 114, together with the robotic arm 116, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 116 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 112, surgical tool, or other object held by the robot 114 (or, more specifically, by the robotic arm 116) may be precisely positionable in one or more needed and specific positions and orientations.
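- As a minimal illustration of the pose concept above (an assumed representation, not the disclosed one), a pose can be encoded as a 3D position plus a unit-quaternion orientation:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose:
    """A pose pairs a position with an orientation; a unit quaternion
    (w, x, y, z) is one common orientation encoding."""
    position_m: Tuple[float, float, float]
    orientation_wxyz: Tuple[float, float, float, float]

# The identity pose: at the origin, with no rotation.
identity = Pose(position_m=(0.0, 0.0, 0.0), orientation_wxyz=(1.0, 0.0, 0.0, 0.0))
```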
- the robotic arm(s) 116 may include one or more sensors that enable the processor 104 (or a processor of the robot 114) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
- reference markers may be placed on the robot 114 (including, e.g., on the robotic arm 116), the imaging device 112, or any other object in the surgical space.
- the reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.
- the navigation system 118 can be used to track other components of the system (e.g., imaging device 112) and the system can operate without the use of the robot 114 (e.g., with the surgeon manually manipulating the imaging device 112 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 118, for example).
- the navigation system 118 may provide navigation for a surgeon and/or a surgical robot during an operation.
- the navigation system 118 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ surgical navigation system.
- the navigation system 118 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers, or other objects within the operating room or other room in which some or all of the system 100 is located.
- the one or more cameras may be optical cameras, infrared cameras, or other cameras.
- the navigation system 118 may include one or more electromagnetic sensors.
- the navigation system 118 may be used to track a position and orientation (e.g., a pose) of the imaging device 112, the robot 114 and/or robotic arm 116, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to the one or more of the foregoing).
- the navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing device 102, imaging device 112, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 118.
- the system 100 can operate without the use of the navigation system 118.
- the navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or a component thereof, to the robot 114, or to any other element of the system 100 regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
- the processor 104 may utilize data stored in memory 106 as a neural network.
- the neural network may include a machine learning architecture.
- the neural network may be or include one or more classifiers.
- the neural network may be or include any machine learning network such as, for example, a deep learning network, a convolutional neural network, a reconstructive neural network, a generative adversarial neural network, or any other neural network capable of accomplishing functions of the computing device 102 described herein.
- Some elements stored in memory 106 may be described as or referred to as instructions or instruction sets, and some functions of the computing device 102 may be implemented using machine learning techniques.
- the processor 104 may support machine learning model(s) which may be trained and/or updated based on data (e.g., training data) provided or accessed by any of the computing device 102, the imaging device 112, the robot 114, the navigation system 118, the database 130, and/or the cloud network 134.
- the machine learning model(s) may be built and updated by the system 100 based on the training data (also referred to herein as training data and feedback).
- the neural network may generate one or more algorithms (e.g., processing algorithms) supportive of object detection 129.
- the database 130 may store information that correlates one coordinate system to another (e.g., imaging coordinate systems, robotic coordinate systems, a patient coordinate system, a navigation coordinate system, etc.).
- the database 130 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient’s anatomy at and/or proximate the surgical site, for use by the imaging device 112, robot 114, the navigation system 118, and/or a user of the computing device 102 or of the system 100); one or more images useful in connection with a surgery to be completed or analyzed; and/or any other useful information.
- the database 130 may additionally or alternatively store, for example, images captured or generated based on image data provided by the imaging device 112.
- the database 130 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud network 134.
- the database 130 may be or include part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
- the computing device 102 may communicate with a server(s) and/or a database (e.g., database 130) directly or indirectly over a communications network (e.g., the cloud network 134).
- the communications network may include any type of known communication medium or collection of communication media and may use any type of protocols to transport data between endpoints.
- the communications network may include wired communications technologies, wireless communications technologies, or any combination thereof.
- Wired communications technologies may include, for example, Ethernet-based wired local area network (LAN) connections using physical transmission mediums (e.g., coaxial cable, copper cable/wire, fiber-optic cable, etc.).
- Wireless communications technologies may include, for example, cellular or cellular data connections and protocols (e.g., digital cellular, personal communications service (PCS), cellular digital packet data (CDPD), general packet radio service (GPRS), enhanced data rates for global system for mobile communications (GSM) evolution (EDGE), code division multiple access (CDMA), single-carrier radio transmission technology (1xRTT), evolution-data optimized (EVDO), high speed packet access (HSPA), universal mobile telecommunications service (UMTS), 3G, long term evolution (LTE), 4G, and/or 5G, etc.), Bluetooth®, Bluetooth® low energy, Wi-Fi, radio, satellite, infrared connections, and/or ZigBee® communication protocols.
- the Internet is an example of the communications network that constitutes an Internet Protocol (IP) network consisting of multiple computers, computing networks, and other communication devices located in multiple locations, and components in the communications network (e.g., computers, computing networks, communication devices) may be connected through one or more telephone systems and other means.
- the communications network may include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a wireless LAN (WLAN), a Session Initiation Protocol (SIP) network, a Voice over Internet Protocol (VoIP) network, a cellular network, and any other type of packet-switched or circuit-switched network known in the art.
- the communications network may include any combination of networks or network types.
- the communications network may include any combination of communication mediums such as coaxial cable, copper cable/wire, fiber-optic cable, or antennas for communicating data (e.g., transmitting/receiving data).
- the computing device 102 may be connected to the cloud network 134 via the communication interface 108, using a wired connection, a wireless connection, or both. In some implementations, the computing device 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud network 134.
- the system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 300 and/or 400 described herein.
- the system 100 or similar systems may also be used for other purposes.
- Fig. 1B illustrates an example of the system 100 that supports aspects of the present disclosure.
- Fig. 1C illustrates an example of the system 100 that supports aspects of the present disclosure. Aspects of the system 100 previously described with reference to Figs. 1A and 1B, and descriptions of like elements, are omitted for brevity.
- a coordinate system 101 is illustrated. The coordinate system 101 includes three dimensions including an X-axis, a Y-axis, and a Z-axis.
- the coordinate system 101 may be used to define planes (e.g., the XY-plane, the XZ-plane, and the YZ-plane) of the system 100. These planes may be disposed orthogonal, or at 90 degrees, to one another.
- the origin of the coordinate system 101 may be placed at any point on or near the components of the system 100 (e.g., components of the imaging device 112). For the purposes of description, the axes of the coordinate system 101 are always disposed along the same directions from figure to figure, whether the coordinate system 101 is shown or not. In some examples, reference may be made to dimensions, angles, directions, relative positions, and/or movements associated with one or more components of the system 100 (e.g., imaging device 112) with respect to the coordinate system 101.
- the system 100 may be used to initiate long scans of a patient, adjust imaging components to capture localization images of the patient, set or identify target coordinates associated with the localization images, perform a long scan process based on the localization images and the target coordinates, capture multidimensional images based on the target coordinates, and generate a long scan image based on the multidimensional images.
- the imaging device 112 includes an upper wall or member 152, a lower wall 161 (also referred to herein as member 161), and a pair of sidewalls 156-a and 156-b (also referred to herein as members 156-a and 156-b).
- the imaging device 112 is fixedly securable to an operating room surface 168 (such as, for example, a ground surface of an operating room or other room).
- the imaging device 112 may be releasably securable to the operating room surface 168 or may be a standalone component that is simply supported by the operating room surface 168.
- a table 150 configured to support the patient 148 may be positioned orthogonally to the imaging device 112, such that the table 150 extends in a first direction from the imaging device 112.
- the table 150 may be mounted to the imaging device 112.
- the table 150 may be releasably mounted to the imaging device 112.
- the table 150 may not be attached to the imaging device 112.
- the table 150 may be supported and/or mounted to an operating room wall, for example.
- the table 150 may be mounted to the imaging device 112 such that a pose of the table 150 relative to the imaging device 112 is selectively adjustable.
- the patient 148 may be positioned on the table 150 in a supine position, a prone position, a recumbent position, and the like.
- the table 150 may be any operating table configured to support the patient 148 during a surgical procedure.
- the table 150 may include any accessories mounted to or otherwise coupled to the table 150 such as, for example, a bed rail, a bed rail adaptor, an arm rest, an extender, or the like.
- the table 150 may be stationary or may be operable to maneuver the patient 148 (e.g., the table 150 may be moveable).
- the table 150 has two positioning degrees of freedom and one rotational degree of freedom, which allows positioning of the specific anatomy of the patient anywhere in space (within a volume defined by the limits of movement of the table 150).
- the table 150 may slide forward and backward and from side to side, tilt (e.g., around an axis positioned between the head and foot of the table 150 and extending from one side of the table 150 to the other) and/or roll (e.g., around an axis positioned between the two sides of the table 150 and extending from the head of the table 150 to the foot thereof).
- the table 150 may be bendable at one or more areas (which bending may be possible due to, for example, the use of a flexible surface for the table 150, or by physically separating one portion of the table 150 from another portion of the table 150 and moving the two portions independently).
- the table 150 may be manually moved or manipulated by, for example, a surgeon or other user, or the table 150 may include one or more motors, actuators, and/or other mechanisms configured to enable movement and/or manipulation of the table 150 by a processor such as a processor 104 of the computing device 102.
- the imaging device 112 includes a gantry.
- the gantry may be or include a substantially circular, or “O-shaped,” housing that enables imaging of objects placed into an isocenter thereof.
- the gantry may be positioned around the object being imaged.
- the gantry may be disposed at least partially within the member 152, the sidewall 156-a, the sidewall 156-b, and the lower wall 161 of the imaging device 112.
- the imaging device 112 also includes a source 138 and a detector 140.
- the source 138 may be a device configured to generate and emit radiation
- the detector 140 may be a device configured to detect the emitted radiation.
- the source 138 and the detector 140 may be or include an imaging source and an imaging detector (e.g., the source 138 and the detector 140 are used to generate data useful for producing images).
- the source 138 may be positioned in a first position and the detector 140 may be positioned in a second position opposite the source 138.
- the source 138 may include an X-ray source (e.g., a thermionic emission tube, a cold emission x-ray tube, or the like).
- the source 138 may project a radiation beam that passes through the patient 148 and onto the detector 140 located on the opposite side of the imaging device 112.
- the detector 140 may be or include one or more sensors that receive the radiation beam (e.g., once the radiation beam has passed through the patient 148) and transmit information related to the radiation beam to one or more other components (e.g., processor 104) of the system 100 for processing.
- the detector 140 may include an array.
- the detector 140 may include three 2D flat panel solid-state detectors arranged side-by-side, and angled to approximate the curvature of the imaging device 112. It will be understood, however, that various detectors and detector arrays can be used with the imaging device 112, including any detector configurations used in typical diagnostic fan-beam or cone-beam CT scanners.
- the detector 140 may include a 2D thin-film transistor X-ray detector using scintillator amorphous-silicon technology.
- the source 138 may be or include a radiation tube (e.g., an x-ray tube) capable of generating the radiation beam.
- the source 138 and/or the detector 140 may include a collimator 144 configured to confine or shape the radiation beam emitted from the source 138 and received at the detector 140.
- the signals output from the detector 140 may be processed by the processor 104 to generate a reconstructed image of the patient tissue. In this way, the imaging device 112 can effectively generate reconstructed images of the patient tissue imaged by the source 138 and the detector 140.
- the source 138 and the detector 140 may be attached to the gantry and configured to rotate 360 degrees around the patient 148 in a continuous or step-wise manner so that the radiation beam can be projected through the patient 148 at various angles.
- the source 138 and the detector 140 may rotate, spin, or otherwise revolve about an axis that passes through the top and bottom of the patient 148, with the patient anatomy that is the subject of the imaging positioned at the isocenter of the imaging device 112.
- the rotation may occur through a drive mechanism that causes the gantry to move such that the source 138 and the detector 140 encircle the patient 148 on the table 150.
- the radiation beam passes through and is attenuated by the patient 148.
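- As standard background (not specific to this disclosure), the attenuation along each ray is commonly modeled by the Beer-Lambert law,

$$I = I_0 \exp\!\left(-\int_{\text{ray}} \mu(s)\, ds\right),$$

where $I_0$ is the emitted intensity, $I$ the detected intensity, and $\mu(s)$ the linear attenuation coefficient of the tissue along the ray; reconstruction techniques invert many such line integrals acquired over the projection angles.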
- the detected radiation from each of the projection angles can then be processed, using various reconstruction techniques, to produce a 2D or 3D reconstruction image of the patient 148.
- the processor 104 may be used to perform image processing 120 to generate the reconstruction image.
- the source 138 and the detector 140 may move along a length of the patient 148, as depicted in Fig. 1B.
- the table 150 holding the patient 148 may move in the direction of arrow 136 while the source 138 and detector 140 remain in a fixed location, such that the length of the patient can be scanned.
- the scanned data may be used to generate one or more reconstructed images of the patient 148 and/or a long scan image of the patient 148.
- the imaging device 112 may be included in the O-Arm® imaging system sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colo., USA.
- further details of the imaging device 112, including the O-Arm® imaging system, and of other appropriate imaging systems supportive of aspects of the present disclosure, can be found in U.S. Pat. Nos. 7,188,998, 7,108,421, 7,106,825, 7,001,045 and 6,940,941, each of which is incorporated herein by reference.
- the O-Arm® imaging system can include a mobile cart (not illustrated) that supports movement of the imaging device 112 from one operating theater or room to another, and the gantry may move relative to the mobile cart.
- the system 100 may perform a long scan process by moving the source 138 and the detector 140 along a direction of an axis running through the patient 148.
- the direction of the axis along which the source 138 and the detector 140 move may be the same direction as (e.g., extend in a direction parallel to) the direction indicated by the arrow 136.
- the long scan process may include moving the patient 148 relative to the source 138 and the detector 140.
- the patient 148 may be positioned in a prone position on the table 150.
- the system 100 may move the table 150 through an isocenter of the imaging device 112, in the direction (or opposite the direction) indicated by the arrow 136, such that the source 138 and the detector 140 generate projection data along a length of the patient 148.
- the system 100 may move the source 138 and the detector 140 relative to the patient 148 (or move the table 150 and patient 148 relative to the source 138 and the detector 140) at a predetermined rate and/or a fixed rate.
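- As a non-authoritative sketch of the fixed-rate motion described above, the per-acquisition positions along the scan axis can be derived from the scan rate and a sampling frequency; the function name and parameters below are illustrative assumptions, not part of the disclosure.

```python
def table_positions(start_mm: float, stop_mm: float,
                    rate_mm_s: float, sample_hz: float) -> list[float]:
    """Positions (mm) along the scan axis at each acquisition instant,
    assuming motion at a fixed rate past a stationary source/detector
    pair (or equivalently, source/detector motion past a fixed table)."""
    step = rate_mm_s / sample_hz  # distance traveled per acquisition
    count = int(abs(stop_mm - start_mm) / step) + 1
    sign = 1.0 if stop_mm >= start_mm else -1.0
    return [start_mm + sign * i * step for i in range(count)]
```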
- the system 100 may support scanning a patient anatomy (not illustrated) of the patient 148. Examples of the patient anatomy are illustrated at Figs. 2A through 2F (e.g., patient anatomy 210).
- Fig. 2A illustrates an example 200-a of localization images 205 supported by aspects of the present disclosure.
- Fig. 2B illustrates an example 200-b of a long scan image 206-a supported by aspects of the present disclosure. Aspects of Fig. 2A and Fig. 2B are described with reference to Fig. 1C.
- the system 100 may support the acquisition of localization images 205 (also referred to herein as “scout images”).
- the localization images 205 may provide anatomical information based on which the system 100 or a user may localize a target patient anatomy (e.g., patient anatomy 210).
- the length of the scan may be configured by setting a start position (e.g., position 160-a associated with capturing a region 165-a) and a stop position (e.g., position 160-c associated with capturing a region 165-c) associated with the long scan process.
- position 160-a may be the center of region 165-a
- position 160-c may be the center of region 165-c.
- Region 165-a and region 165-c illustrate examples of what is captured in the localization image 205-a and the localization image 205-c.
- the patient anatomy 210 may be the spine of a patient 148, and when performing the long scan of the patient anatomy 210 (e.g., spine) to analyze spinal deformities, the long scan may fail to image a deformity (e.g., curvature 211) due to the deformity being absent from the regions 165-a and 165-c associated with the start and stop positions.
- a portion of the deformity (e.g., curvature 211) between the start and stop positions may fail to be captured in the resultant long scan image 206-a.
- a consequence associated with missing the deformity (e.g., curvature 211) in the image may include a retaking of an additional long scan image, which would result in increased radiation exposure (e.g., radiation dose) to the patient 148 due to the rescan.
- Another example consequence associated with missing the deformity is increased surgery time due to time associated with retaking the long scan image.
- aspects of the present disclosure support capturing one or more additional localization images 205 (e.g., localization image 205-b, also referred to herein as an intermediate localization image) between the start and stop positions.
- the systems and techniques described herein may provide a preview feature (also referred to herein as a “multi-field of view preview” feature or “field of view preview” feature) which displays localization image 205-a through localization image 205-c.
- the systems and techniques support user selection of target coordinates (represented by a ‘+’ at Figs. 2C and 2F).
- the system 100 may capture multidimensional images based on the target coordinates, and according to example aspects described herein, generate a long scan image 206-b from the multidimensional images.
- the systems and techniques described herein support a preview associated with capturing a long scan image 206-b, which will allow users to ensure that the entirety of the patient anatomy 210 will be captured in the long scan image 206-b. Accordingly, for example, the systems and techniques described herein may prevent additional radiation exposure to the patient 148 (and operators) due to additional long scans, reduce time associated with imaging procedures, reduce time associated with medical procedures, and the like.
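- The coverage problem motivating the intermediate localization images can be stated numerically. The sketch below is illustrative only; `fov` is an assumed per-image field-of-view extent along the scan axis, and the function estimates how many intermediate images are needed so that consecutive regions 165 leave no gap between the start and stop positions.

```python
import math

def needed_intermediates(start: float, stop: float, fov: float) -> int:
    """Number of intermediate localization images required so that
    fields of view centered on consecutive positions abut or overlap
    between the start and stop positions."""
    span = abs(stop - start)
    if span <= fov:
        return 0  # the start and stop regions already cover the span
    # With k intermediates there are k + 1 center-to-center intervals,
    # each of which must not exceed the field of view.
    return math.ceil(span / fov) - 1
```

For example, a 100 mm start-to-stop span with a 40 mm field of view yields two intermediate images; a single intermediate image would leave consecutive centers 50 mm apart, exceeding the 40 mm coverage.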
- Fig. 3 illustrates an example of a process flow 300 in accordance with aspects of the present disclosure.
- process flow 300 may be implemented by aspects of the system 100 described herein.
- the operations may be performed in a different order than shown, or at different times. Certain operations may also be left out of the process flow 300, or other operations may be added to the process flow 300.
- any device (e.g., computing device 102, imaging device 112, etc.) of the system 100 may perform the operations shown.
- the process flow 300 may include setting one or more positions associated with performing a preview scan process.
- the preview scan process may be associated with scanning a patient anatomy 210 associated with patient 148.
- setting the one or more positions may include setting a start position (e.g., position 160-a), one or more intermediate positions (e.g., position 160-b), and a stop position (e.g., position 160-c) associated with performing the preview scan process.
- the one or more intermediate positions (e.g., position 160-b, etc.) may be located between the start position (e.g., position 160-a) and the stop position (e.g., position 160-c) with respect to an axis (e.g., the X-axis).
- the patient anatomy 210 may include a spinal structure of the patient 148, but aspects of the present disclosure are not limited thereto.
- the imaging techniques described herein may be implemented in association with generating long scans of other anatomical elements (e.g., soft tissue, tumors, other bone structures, etc.) of patient 148.
- the process flow 300 may include performing the preview scan process.
- the process flow 300 may include capturing localization images 205 of the patient anatomy 210, in which the localization images 205 are captured respective to the start position (e.g., position 160-a), the one or more intermediate positions (e.g., position 160-b), and the stop position (e.g., position 160-c).
- the system 100 may capture a localization image 205-a respective to position 160-a, capture a localization image 205-b (also referred to herein as an intermediate localization image) respective to position 160-b, and capture a localization image 205-c respective to position 160-c.
- the localization images 205 may be scout images as described herein.
- the system 100 may utilize the localization images 205 to define the scan range of a subsequent scan (e.g., an X-ray scan, a CT scan, an MRI scan, etc.) implemented in association with a long scan process. Examples of the long scan process are later described with reference to 330.
- the localization image 205-a through localization image 205-c may each include a respective portion of the patient anatomy 210.
- the localization images 205 may include image data that respectively corresponds to regions 165 (as illustrated in Fig. 1C) and corresponding portions of the patient anatomy 210 (as illustrated in Figs. 2B through 2D).
- the localization images 205 may be overlapping or non-overlapping.
- for example, a localization image 205 (e.g., localization image 205-b) may at least partially overlap one or more other localization images 205 (e.g., localization image 205-a, localization image 205-c), such that a portion of the patient anatomy 210 is captured in both localization images 205.
- the localization images 205 may be equal or different in size.
- for example, a localization image 205 (e.g., localization image 205-b) may be equal to or different in size from another localization image 205 (e.g., localization image 205-a, localization image 205-c).
- with reference to Figs. 2C through 2F, it is to be understood that although a single intermediate localization image (e.g., localization image 205-b) is illustrated, aspects of the present disclosure are not limited thereto.
- the systems and techniques described herein support capturing any quantity of intermediate localization images corresponding to respective positions located between the start position (e.g., position 160-a) and the stop position (e.g., position 160-c) with respect to an axis (e.g., X-axis).
- Each of the localization images 205 may be an X-ray image, a CT image, an MRI image, an optical image, or a LiDAR image.
- the localization images 205 may include any combination of image types.
- the localization images 205 may be of the same image type.
- two or more of the localization images 205 may be of different image types.
- the localization image 205-a may be a first image type (e.g., X-ray image)
- the localization image 205-b and/or localization image 205-c may be a second image type (e.g., CT image, MRI image, optical image, LiDAR image, etc.).
- the systems and techniques described herein may support merging image data of the first imaging type (e.g., X-ray imaging data) with image data of the second imaging type (e.g., CT imaging data, MRI imaging data, etc.).
- aspects of the present disclosure support setting any quantity of intermediate positions between the start position (e.g., position 160-a) and the stop position (e.g., position 160-c), and further, capturing intermediate localization images respective to the intermediate positions. It is to be understood that any of the aspects described herein with respect to a singular intermediate localization image (e.g., localization image 205-b) may support implementations based on multiple intermediate localization images.
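- Under the same illustrative assumptions as the coverage sketch above, evenly spaced intermediate positions between the start and stop positions could be computed as follows (a sketch, not the disclosed implementation):

```python
def intermediate_positions(start: float, stop: float, count: int) -> list[float]:
    """Evenly spaced positions strictly between the start and stop
    positions along the scan axis (e.g., the X-axis)."""
    step = (stop - start) / (count + 1)
    return [start + step * (i + 1) for i in range(count)]
```

For example, `intermediate_positions(0.0, 100.0, 2)` returns positions near 33.3 and 66.7, splitting the span into three equal intervals.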
- the process flow 300 may include generating a multiple field of view representation (e.g., multiple field of view representation 201-a illustrated in Fig. 2C, multiple field of view representation 201-b illustrated in Fig. 2D, multiple field of view representation 203 illustrated in Fig. 2F, etc.) of the patient anatomy 210 in response to performing the preview scan process.
- the multiple field of view representation may display a full scan of a patient anatomy 210 in combination with indicators (e.g., dashed lines) corresponding to boundaries of the localization images 205, as illustrated in Figs. 2C and 2D. In some other examples, the multiple field of view representation may display only the image data of the localization images 205, as illustrated in Fig. 2F. It is to be understood that any of the aspects described herein with reference to the process flow 300 may be applied to any of the multiple field of view representations (e.g., multiple field of view representation 201-a, multiple field of view representation 201-b, multiple field of view representation 203).
- the system 100 may provide synchronized navigation of the localization image 205-a, the localization image 205-b, and the localization image 205-c.
- the system 100 may display the localization images 205 such that navigation movement within any one localization image 205 is simultaneously reflected at all of the localization images 205.
- the image data within the boundary boxes (represented by dotted lines in Figs. 2C through 2F) of the localization images 205 may move together in synchronization.
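- One way to realize this lock-step behavior, sketched under the assumption that each view tracks a 2D pan offset (the class and method names are illustrative), is to broadcast any single-view pan to every view:

```python
class SynchronizedViews:
    """Boundary-box pan offsets for all localization-image views;
    a drag in any one view is applied to every view."""

    def __init__(self, view_ids: list[str]):
        self.offsets = {v: (0.0, 0.0) for v in view_ids}

    def pan(self, dx: float, dy: float) -> None:
        # Apply the same displacement everywhere so the boundary
        # boxes of all localization images move in synchronization.
        for v, (x, y) in self.offsets.items():
            self.offsets[v] = (x + dx, y + dy)
```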
- the process flow 300 may include setting or identifying target coordinates (represented by a ‘+’ at Figs. 2C and 2F) associated with any or all of the localization images 205.
- the system 100 may set or identify the target coordinates in response to a user input (at 321) associated with a localization image 205.
- the user input may be a user selection (e.g., via a touchscreen displaying the localization image 205, using a mouse or controller input, etc.) of the target coordinates.
- the user input may include a user selection of the target region 220 (also referred to herein as a region of interest) included in the localization image 205, and the target coordinates may correspond to the target region 220, be located inside of the target region 220, or the like. Examples of the target region 220 are illustrated at Fig. 2C.
- the user input may include a user selection of a target feature 225 (also referred to herein as an object of interest) of the patient anatomy 210.
- the system 100 may set target region 220-b corresponding to the target feature 225.
- the target region 220 may completely surround the target feature 225. Aspects of the present disclosure are not limited thereto, and the target feature 225 and the target region 220 may be different in size or equal in size.
- the system 100 may recapture the localization images 205-a through 205-c based on the target coordinates and display updates to the localization images 205-a through 205-c.
- the system 100 may shift the capture coordinates of the localization images 205-a through 205-c with respect to one or more axes (e.g., with respect to the Y-axis) based on the target coordinates.
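- A minimal sketch of this recentering step, assuming posterior-anterior localization images where only the Y coordinate is free to shift (the names and the axis mask are illustrative assumptions):

```python
def shifted_capture_center(current: tuple[float, float],
                           target: tuple[float, float],
                           shift_axes: tuple[bool, bool] = (False, True)
                           ) -> tuple[float, float]:
    """Recenter a localization image's capture coordinates on the
    selected target coordinates, shifting only along permitted axes
    (here Y only, matching a posterior-anterior acquisition)."""
    cx, cy = current
    tx, ty = target
    return (tx if shift_axes[0] else cx, ty if shift_axes[1] else cy)
```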
- Fig. 2D illustrates an example outcome of setting or adjusting the target coordinates in association with capturing patient anatomy (e.g., curvature 211 of the patient anatomy 210) that may potentially fail to be captured in a long scan process.
- the system 100 may support user confirmation of the updates and/or further user input (e.g., reselection of target coordinates, etc.).
- the system 100 may perform a long scan process using the target coordinates, example aspects of which are later described herein with reference to 330 of the process flow 300.
- the system 100 may capture multidimensional images based on the localization images 205 and the respective target coordinates for each localization image 205 as described with reference to Fig. 2C, such that each multidimensional image is centered with respect to the respective target coordinates.
- the system 100 may set or calculate an image acquisition path for performing the long scan process.
- the systems and techniques may support user selection of desired centers of localization images 205. Accordingly, for example, by setting the desired centers of localization images 205, the user may set desired centers of corresponding multidimensional images to be captured in the long scan process.
- the system 100 may support Al and computer vision based selection of the target coordinates for any or all of the localization images 205. For example, with reference to the example at Fig. 2C, at 322 of the process flow 300, the system 100 may detect a target feature 225 (e.g., using object detection 129 described with reference to Fig. 1A). At 323, the system 100 may display an indicator (e.g., highlighting, outline, etc.) corresponding to the target feature 225 and/or an indicator (e.g., a circle, a dotted circle, a rectangle, etc.) corresponding to a target region 220-b.
- the system 100 may set candidate target coordinates (e.g., represented by indicator ‘+’) in association with the target feature 225.
- the system 100 may alert the user of a candidate object of interest (e.g., the target feature 225) included in the localization image 205-b and the candidate target coordinates.
- the system 100 may support features for user confirmation (e.g., approval, denial) and user modification of the candidate target coordinates.
- the system 100 may identify and/or set the target coordinates without a user input.
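- The candidate-coordinate step can be sketched as follows, with `detect_fn` standing in for any object detector (e.g., object detection 129); the interface below (boxes as `(x0, y0, x1, y1)` tuples with confidence scores) is an assumption for illustration, not the disclosed implementation.

```python
def candidate_target_coordinates(image, detect_fn):
    """Propose candidate target coordinates ('+') from the detected
    feature with the highest confidence, or None to defer to the user."""
    detections = detect_fn(image)  # assumed: [((x0, y0, x1, y1), score), ...]
    if not detections:
        return None                # fall back to user selection
    (x0, y0, x1, y1), _score = max(detections, key=lambda d: d[1])
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)  # box center as candidate
```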
- the process flow 300 may include performing a long scan process associated with scanning the patient 148 based on the localization image 205-a, the localization image 205-b (or multiple intermediate localization images), and the localization image 205-c.
- the system 100 may capture multidimensional images (e.g., 2D or 3D) including the patient anatomy 210 based on the localization image 205-a, the localization image 205-b, and the localization image 205-c.
- the multidimensional images may be X-ray images, CT images, or MRI images, but are not limited thereto.
- the system 100 may capture multidimensional images based on the target coordinates associated with the localization images 205. For example, the system 100 may capture the multidimensional images such that the multidimensional images are centered with respect to the respective target coordinates described with reference to 325. In an example, the system 100 may capture a multidimensional image corresponding to a localization image 205 (e.g., localization image 205-a, localization image 205-b, etc.) such that the multidimensional image is centered with respect to the respective target coordinates (‘ +’) as set by a user and/or the system 100.
- a localization image 205 e.g., localization image 205-a, localization image 205-b, etc.
- the system 100 may generate a long scan image 206-b including the patient anatomy 210 as described herein.
- the system 100 may generate the long scan image 206-b based on the multidimensional images captured at 335.
- the system 100 may merge the data from the multidimensional images into the long scan image 206-b (e.g., capture and “stitch” together the multidimensional images).
- the systems and techniques described herein support operator based adjustment to localization images 205 (e.g., user selection or confirmation of target coordinates, target regions 220, target features 225, etc.), and the system 100 may capture and stitch together multidimensional images which correspond to the localization images 205.
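- The merge-and-stitch step can be illustrated with a simple position-based sketch (assuming 2D tiles and known row offsets along the scan axis; real stitching may involve registration and blending beyond this):

```python
import numpy as np

def stitch_long_image(tiles: list[np.ndarray],
                      row_offsets: list[int]) -> np.ndarray:
    """Merge vertically offset 2D tiles into one long image,
    averaging wherever adjacent tiles overlap."""
    height = max(off + t.shape[0] for t, off in zip(tiles, row_offsets))
    width = max(t.shape[1] for t in tiles)
    acc = np.zeros((height, width))
    weight = np.zeros((height, width))
    for tile, off in zip(tiles, row_offsets):
        h, w = tile.shape
        acc[off:off + h, :w] += tile
        weight[off:off + h, :w] += 1.0
    weight[weight == 0] = 1.0  # avoid division by zero outside tiles
    return acc / weight
```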
- Aspects of the present disclosure may be implemented based on localization images 205 of any image type.
- the localization images 205 may include localization images captured from any angle (e.g., any angle with respect to the patient 148) or any plane of orientation.
- Non-limiting examples of the image types include posterior-anterior images, anterior-posterior images, lateral images, oblique images, axial images, coronal images, sagittal images, and the like.
- aspects of the present disclosure support setting target coordinates (and shifting capture coordinates based on the target coordinates) with respect to any axis of the coordinate system 101.
- the localization images 205 may be posterior-anterior images, and the system 100 may support setting target coordinates (and shifting capture coordinates based on the target coordinates) with respect to the X-axis and Y-axis of the coordinate system 101.
- the localization images 205 may be lateral images of the patient 148, and the system 100 may support setting target coordinates (and shifting capture coordinates based on the target coordinates) with respect to the X-axis and Z-axis of the coordinate system 101.
- the systems and techniques described herein support generating a long scan image 206-b which captures a portion (e.g., a curvature 211) of the patient anatomy 210 that might otherwise fail to be captured.
- the systems and techniques described herein support setting parameters associated with the long scan process.
- the system 100 may adjust one or more parameters associated with the imaging device 112 such that radiation emitted from the source 138 is focused on the target coordinates associated with each localization image 205.
- the system 100 may support setting parameters of an image acquisition path associated with the long scan process, and the image acquisition path may be a fixed path or modifiable. For example, the system 100 may calculate one or more parameters associated with an image acquisition path based on the localization image 205-a, the localization image 205-b, and the localization image 205-c, and the system 100 may perform the long scan process based on the one or more parameters and the image acquisition path.
- the image acquisition path associated with capturing the localization images 205 and the multidimensional images may be a linear path (for example, due to features of the O-arm associated with capturing images).
- the system 100 may identify image capture coordinates along the linear path for capturing the multidimensional images.
- the system 100 may set the image capture coordinates for capturing the multidimensional images, in which the image capture coordinates correspond to (e.g., with reference to the X-axis and/or Y-axis) the target coordinates (‘+’) associated with the localization images 205.
- the image capture coordinates may be different from respective centers of the localization images 205.
- the system 100 may set an image acquisition path for capturing images with respect to three dimensions, and the image acquisition path may include image capture coordinates and/or device movement with respect to any of the X, Y, and Z axes.
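- A sketch of such an acquisition path (illustrative names; stations along a linear X-axis path, with per-station Y coordinates following any set target coordinates rather than the localization-image centers):

```python
def acquisition_path(stations_x: list[float],
                     targets_y: dict[int, float],
                     default_y: float = 0.0) -> list[tuple[float, float]]:
    """Image-capture coordinates along a linear (X-axis) path; a
    station's Y coordinate follows its target coordinates when set,
    so captures need not be centered on the localization images."""
    return [(x, targets_y.get(i, default_y))
            for i, x in enumerate(stations_x)]
```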
- Fig. 4 illustrates an example of a process flow 400 in accordance with aspects of the present disclosure.
- process flow 400 may be implemented by aspects of the system 100 described herein.
- the operations may be performed in a different order than shown, or at different times. Certain operations may also be left out of the process flow 400, or other operations may be added to the process flow 400.
- any device (e.g., computing device 102, imaging device 112, etc.) of the system 100 may perform the operations shown.
- the process flow 400 may include performing a preview scan process associated with scanning a patient anatomy.
- performing the preview scan process includes (at 415): capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position.
- the one or more intermediate positions are between the start position and the stop position with respect to a first axis.
- the first localization image includes at least a first portion of the patient anatomy
- the second localization image includes at least a second portion of the patient anatomy
- the one or more intermediate localization images include at least a third portion of the patient anatomy.
- the first localization image, the second localization image, and the one or more intermediate localization images each include an X-ray image, a CT image, an MRI image, an optical image, or a LiDAR image.
- the process flow 400 may include generating a multiple field of view representation of the patient anatomy in response to performing the preview scan process.
- generating the multiple field of view representation includes providing synchronized navigation of the first localization image, the second localization image, and the one or more intermediate localization images.
- the process flow 400 may include identifying target coordinates corresponding to a target region included in at least one localization image among the first localization image, the second localization image, and the one or more intermediate localization images, wherein performing the long scan process is based on the target coordinates.
- identifying the target coordinates is in response to a user selection of at least one of: the target region included in the at least one localization image; and a target portion of the patient anatomy, wherein the target portion is included in the target region.
- identifying the target coordinates is in response to detecting a target feature included in the target region, wherein the target feature is associated with the patient anatomy.
- the target region included in the at least one localization image includes at least a portion of the patient anatomy.
- the process flow 400 may include performing a long scan process associated with scanning the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- performing the long scan process may include (at 435): capturing a set of multidimensional images including the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the set of multidimensional images may include an X-ray image, a CT image, or an MRI image.
- the process flow 400 may include generating a long scan image including the patient anatomy based on merging data associated with the set of multidimensional images.
- the patient anatomy includes a spinal structure.
- the long scan image depicts at least a curved portion of the patient anatomy.
- the patient anatomy includes soft tissue (e.g., tumors, etc.).
- the process flow 400 may include calculating one or more parameters associated with an image acquisition path based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the process flow 400 may include calculating the one or more parameters associated with the image acquisition path based on the target coordinates.
- the process flow 400 may include performing the long scan process (at 430) associated with scanning the patient anatomy based on the one or more parameters associated with the image acquisition path.
- the process flow 400 (and/or one or more operations thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
- the at least one processor may be part of an imaging system (including the imaging device 112), a robot (such as a robot 114) or part of a navigation system (such as a navigation system 118).
- a processor other than any processor described herein may also be used to execute the process flow 400.
- the at least one processor may perform operations of the process flow 400 by executing elements stored in a memory such as the memory 106.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more operations of a function as shown in the process flow 400.
- One or more portions of the process flow 400 may be performed by the processor executing any of the contents of memory, such as image processing 120, a segmentation 122, a transformation 124, a registration 128, and/or object detection 129.
- the present disclosure encompasses methods with fewer than all of the features identified in Figs. 3 and 4 (and the corresponding description of the process flows 300 and 400), as well as methods that include additional features beyond those identified in Figs. 3 and 4 (and the corresponding description of the process flows 300 and 400).
- the present disclosure also encompasses methods that include one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or include a registration or any other correlation.
- Example aspects of the present disclosure include:
- a system including: a processor; and a memory storing instructions thereon that, when executed by the processor, cause the processor to: perform a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and perform a long scan process associated with scanning the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the first localization image includes at least a first portion of the patient anatomy
- the second localization image includes at least a second portion of the patient anatomy
- the one or more intermediate localization images include at least a third portion of the patient anatomy.
- identifying the target coordinates is in response to a user selection of at least one of: the target region included in the at least one localization image; and a target portion of the patient anatomy, wherein the target portion is included in the target region.
- identifying the target coordinates is in response to detecting a target feature included in the target region, wherein the target feature is associated with the patient anatomy.
- the target region included in the at least one localization image includes at least a portion of the patient anatomy.
- any of the aspects herein, wherein the instructions executable by the processor to perform the long scan process are further executable by the processor to: capture a set of multidimensional images including the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images; and generate a long scan image including the patient anatomy based on merging data associated with the set of multidimensional images.
- any of the aspects herein, wherein the set of multidimensional images include an X-ray image, a computed tomography (CT) image, or a magnetic resonance imaging (MRI) image.
- the patient anatomy includes soft tissue.
- the first localization image, the second localization image, and the one or more intermediate localization images each include an X-ray image, a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, an optical image, or a light detection and ranging (LiDAR) image.
- a system including: an imaging device; a processor coupled with the imaging device; and memory coupled with the processor and storing instructions thereon that, when executed by the processor, enable the processor to: perform a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and perform a long scan process associated with scanning the patient anatomy using the imaging device, wherein performing the long scan process is based on the first localization image, the second localization image, and the one or more intermediate localization images.
- the imaging device includes at least one of an O-arm and a C-arm.
- a method including: performing a preview scan process associated with scanning a patient anatomy, wherein performing the preview scan process includes: capturing a first localization image based on a start position associated with the preview scan process; capturing a second localization image based on a stop position associated with the preview scan process; and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position; and performing a long scan process associated with scanning the patient anatomy based on the first localization image, the second localization image, and the one or more intermediate localization images.
- any of the aspects herein further including: generating a multiple field of view representation of the patient anatomy in response to performing the preview scan process, wherein generating the multiple field of view representation includes providing synchronized navigation of the first localization image, the second localization image, and the one or more intermediate localization images.
- each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
- aspects of the present disclosure may take the form of an implementation that is entirely hardware, an implementation that is entirely software (including firmware, resident software, micro-code, etc.) or an implementation combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Any combination of one or more computer-readable medium(s) may be utilized.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- a computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer- readable signal medium may be any computer-readable medium that is not a computer- readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Abstract
A system may perform a preview scan process associated with scanning a patient anatomy. Performing the preview scan process may include capturing a first localization image based on a start position associated with the preview scan process, capturing a second localization image based on a stop position associated with the preview scan process, and capturing one or more intermediate localization images based on one or more intermediate positions different from the start position and the stop position. The system may generate a multi-field of view representation of the patient anatomy in response to performing the preview scan process. The system may perform a long scan process associated with scanning the patient based on the first localization image, the second localization image, and the one or more intermediate localization images. Performing the long scan process may include capturing multidimensional images including the patient anatomy based on the localization images.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US 63/466,642 (US202363466642P) | 2023-05-15 | 2023-05-15 | |
| US 18/626,141 (published as US20240382169A1) | 2023-05-15 | 2024-04-03 | Long image multi-field of view preview |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024238179A1 | 2024-11-21 |
Family
ID=91274563
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/027919 (pending) | Long image multi-field of view preview | 2023-05-15 | 2024-05-06 |
Country Status (1)
| Country | Link |
|---|---|
| WO | WO2024238179A1 |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6940941B2 (en) | 2002-02-15 | 2005-09-06 | Breakaway Imaging, Llc | Breakable gantry apparatus for multidimensional x-ray based imaging |
| US7001045B2 (en) | 2002-06-11 | 2006-02-21 | Breakaway Imaging, Llc | Cantilevered gantry apparatus for x-ray imaging |
| US7106825B2 (en) | 2002-08-21 | 2006-09-12 | Breakaway Imaging, Llc | Apparatus and method for reconstruction of volumetric images in a divergent scanning computed tomography system |
| US7108421B2 (en) | 2002-03-19 | 2006-09-19 | Breakaway Imaging, Llc | Systems and methods for imaging large field-of-view objects |
| US7188998B2 (en) | 2002-03-13 | 2007-03-13 | Breakaway Imaging, Llc | Systems and methods for quasi-simultaneous multi-planar x-ray imaging |
| US20140098932A1 (en) * | 2012-10-04 | 2014-04-10 | General Electric Company | Dual display ct scanner user interface |
| US20200258243A1 (en) * | 2019-02-07 | 2020-08-13 | Siemens Healthcare Gmbh | Dense Body Marker Estimation from Camera Data for Patient Positioning in Medical Imaging |
| US20210150704A1 (en) * | 2019-11-15 | 2021-05-20 | GE Precision Healthcare LLC | Methods and systems for a field-of-view preview |
| EP4159129A1 (fr) * | 2021-10-01 | 2023-04-05 | Koninklijke Philips N.V. | Procédé d'imagerie et d'analyse médicale |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240382265A1 (en) | Hybrid localization for minimally invasive surgery and cervical spinal referencing, and methods for using the same | |
| US20240404129A1 (en) | Systems, methods, and devices for generating a corrected image | |
| US12263026B2 (en) | Systems, methods, and devices for multiple exposures imaging | |
| US20240382169A1 (en) | Long image multi-field of view preview | |
| US20240398362A1 (en) | Ultra-wide 2d scout images for field of view preview | |
| WO2024238179A1 (fr) | Prévisualisation multi-champ de vision d'image longue | |
| US20240407745A1 (en) | Touch and move anatomy localization | |
| WO2024249025A1 (fr) | Images de repérage 2d ultralarges pour prévisualisation de champ de vision | |
| WO2024254040A1 (fr) | Localisation d'anatomie par toucher-déplacer | |
| WO2024229651A1 (fr) | Positionnement intelligent d'un chariot de bras de robot | |
| US20240156531A1 (en) | Method for creating a surgical plan based on an ultrasound view | |
| WO2024246897A1 (fr) | Systèmes et procédés de réglage de balayage long et de suivi anatomique | |
| US20240390700A1 (en) | Systems and methods for noise and patient dose optimization via dynamic x-ray modulation | |
| US20240341601A1 (en) | Surgical positioning methods and methods for determining regions subject to radiation | |
| WO2023141800A1 (fr) | Système de positionnement de rayons x mobile | |
| US12004821B2 (en) | Systems, methods, and devices for generating a hybrid image | |
| US11847809B2 (en) | Systems, devices, and methods for identifying and locating a region of interest | |
| US12182929B2 (en) | Systems and methods for volume reconstructions using a priori patient data | |
| WO2025079075A1 (fr) | Caméra de navigation suivante | |
| WO2024229649A1 (fr) | Dispositif de suivi de patient non invasif pour intervention chirurgicale | |
| WO2024246699A1 (fr) | Optimisation de bruit et de dose patient par modulation de rayons x dynamique | |
| US20220401056A1 (en) | System and method of guidance input detection and surgical equipment positioning | |
| WO2024214068A1 (fr) | Procédés de positionnement chirurgical et procédés de détermination de régions soumises à un rayonnement | |
| WO2025120637A1 (fr) | Systèmes et procédés de planification et de mise à jour de trajectoires pour dispositifs d'imagerie | |
| WO2024236440A1 (fr) | Localisation hybride pour chirurgie minimalement invasive et référencement spinal cervical, et leurs procédés d'utilisation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
- | | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24728836; Country of ref document: EP; Kind code of ref document: A1 |