
WO2025004041A1 - Systems and methods for calibrating an imaging device - Google Patents

Systems and methods for calibrating an imaging device

Info

Publication number
WO2025004041A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging device
image data
dynamic range
calibration
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IL2024/050628
Other languages
English (en)
Inventor
Noam WEISS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazor Robotics Ltd
Original Assignee
Mazor Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazor Robotics Ltd filed Critical Mazor Robotics Ltd
Publication of WO2025004041A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5205: Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
    • A61B 6/58: Testing, adjusting or calibrating thereof
    • A61B 6/582: Calibration
    • A61B 6/585: Calibration of detector units

Definitions

  • the present disclosure is generally directed to calibration, and relates more particularly to calibrating an imaging device.
  • Example aspects of the present disclosure include:
  • the output data is received from a dosimeter configured to output a radiation parameter, wherein the radiation parameter is used by the calibration model to determine the calibration values.
  • the radiation parameter comprises an air kerma value.
  • output data is determined from image data from at least one of the imaging device or another imaging device.
  • the image data comprises x-ray image data.
  • the image processing model configured to process the image data and output a radiation parameter, wherein the radiation parameter is used to generate the calibration model.
  • the radiation parameter comprises an air kerma value.
  • any of the aspects herein further comprising: receiving image data from the imaging device using the limited dynamic range.
  • the memory stores further data for processing by the processor that, when processed, causes the processor to: input the image data into an image processing model, the image processing model configured to process the image data and output a radiation parameter, wherein the radiation parameter is used by the calibration model to determine the calibration values.
  • the radiation parameter comprises an air kerma value.
  • a system for calibrating an imaging device comprises an imaging device having a first dynamic range and a second dynamic range, wherein the second dynamic range is lower than the first dynamic range; a processor; and a memory storing data for processing by the processor, the data, when processed, causing the processor to: receive output data; generate the calibration model using the output data; and apply the calibration model to the imaging device, the calibration model configured to extrapolate calibration values outside of the limited dynamic range.
  • the memory stores further data for processing by the processor that, when processed, causes the processor to: receive image data from the imaging device using the second dynamic range.
  • the memory stores further data for processing by the processor that, when processed, causes the processor to: scale the image data to Hounsfield (HU) values.
  • HU: Hounsfield unit
  • each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
  • each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or a class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo.
  • the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2), as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
  • Fig. 1 is a block diagram of a system according to at least one embodiment of the present disclosure.
  • Fig. 2A is a flowchart according to at least one embodiment of the present disclosure.
  • Fig. 2B is a flowchart according to at least one embodiment of the present disclosure.
  • Fig. 3 is a flowchart according to at least one embodiment of the present disclosure.
  • Fig. 4 is a flowchart according to at least one embodiment of the present disclosure.
  • Fig. 5 is a flowchart according to at least one embodiment of the present disclosure.
  • Fig. 6 is a flowchart according to at least one embodiment of the present disclosure.
  • the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
  • Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
  • processors such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry
  • DSPs: digital signal processors
  • proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
  • CT: computerized tomography
  • HU: Hounsfield unit
  • the scaling may be based on, for example, the Beer-Lambert law, which correlates the attenuation of the photons' energy to a property of the materials through which the X-ray photons are traveling.
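  • As a minimal illustration of the Beer-Lambert relationship just described (a sketch only; the attenuation coefficient and path length below are hypothetical values, not taken from this disclosure):

        import math

        def transmitted_intensity(i0, mu, path_length_cm):
            # Beer-Lambert law: I = I0 * exp(-mu * L) for a homogeneous material,
            # where mu is the linear attenuation coefficient (1/cm) and L the
            # traversed thickness (cm).
            return i0 * math.exp(-mu * path_length_cm)

        # Hypothetical example: 5 cm of soft-tissue-like material (mu ~ 0.2 /cm).
        i0 = 1000.0
        i = transmitted_intensity(i0, mu=0.2, path_length_cm=5.0)

        # Recovering the line integral of attenuation from the measured ratio;
        # for a homogeneous path this equals mu * L.
        line_integral = -math.log(i / i0)
        print(i, line_integral)
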
  • physical information can be obtained from images that are scaled to HU values.
  • the information needed to perform the scaling of each image, and of the resulting volume, may be lacking due to the limited dynamic range of the C-arm or O-arm imaging device or detector.
  • the scaling cannot be accurately performed on an image obtained from the C-arm or O-arm imaging device or detector with a limited dynamic range, as the dynamic range of the C-arm or O-arm imaging device or detector is too low or too high and may result in undesirable saturation or starvation in the image when irradiated with a sufficiently high or low X-ray dose.
  • a calibration such as an air-calibration, an HU-calibration, a CT calibration, or the like is needed to calibrate a non-CT imaging device such as the C-arm or O-arm imaging device.
  • an imaging device (such as, for example, an O-arm or C-arm detector) with a limited dynamic range can be calibrated based on output data from another imaging device such as, for example, another imaging device with a higher dynamic range or a lower dynamic range than the limited dynamic range, an imaging device with multiple modes where each mode operates with different dynamic range(s) (e.g., a dynamic detector), a reference detector (for example, a single-pixel image detector), and/or from a dosimeter.
  • the output data may include a radiation parameter (e.g., air kerma value), image data with unitless numbers, or one or more unitless numbers (obtained from, for example, a reference detector).
  • the output data may be mapped for a desired dynamic range outside of the limited dynamic range and the mapped output data may be used to generate a calibration model.
  • the calibration model is configured to extrapolate calibration values such as expected theoretical values of the imaging device beyond or outside of the limited dynamic range.
  • the calibration enables the imaging device with the limited dynamic range to generate images that can be scaled to HU values.
  • HU values give physical meaning to each voxel of the image and allow for consistent and repeatable measurements across scaled images. For example, different tissue characteristics, such as bone density or the difference between healthy and tumorous tissue, can be measured in an image scaled to HU values.
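  • For concreteness, the standard HU rescaling can be sketched as follows (the water attenuation coefficient below is an assumed illustrative value, and this snippet reflects the conventional HU definition rather than any calibration procedure specific to this disclosure):

        def to_hounsfield(mu, mu_water, mu_air=0.0):
            # Conventional definition: HU = 1000 * (mu - mu_water) / (mu_water - mu_air),
            # so air maps to -1000 HU and water maps to 0 HU.
            return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

        mu_water = 0.19  # assumed linear attenuation of water (1/cm) at a given energy
        print(to_hounsfield(0.0, mu_water))       # air   -> -1000.0
        print(to_hounsfield(mu_water, mu_water))  # water ->     0.0
        print(to_hounsfield(0.38, mu_water))      # dense, bone-like value -> 1000.0
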
  • images scaled to HU values from different imaging devices, imaging devices from different manufacturers, and images from different patients can be compared together, which beneficially enables analysis of scaled images regardless of the manufacturer of the imaging device, whether the same imaging device was used, and/or the patient. In other words, scaled images can be analyzed consistently.
  • Embodiments of the present disclosure provide technical solutions to one or more of the problems of (1) scaling images obtained from an imaging device with a limited dynamic range, (2) calibrating an imaging device with a limited dynamic range, and (3) enabling an imaging device with a limited dynamic range to accurately provide images that are scalable to HU values.
  • Fig. 1 a block diagram of a system 100 according to at least one embodiment of the present disclosure is shown.
  • the system 100 may be used to calibrate an imaging device such as the imaging device 112 and/or carry out one or more other aspects of one or more of the methods disclosed herein.
  • the system 100 comprises a computing device 102, one or more imaging devices 112, a measurement tool 116, a database 130, and/or a cloud or other network 134.
  • Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 100.
  • the system 100 may not include one or more components of the computing device 102, the measurement tool 116, the database 130, and/or the cloud 134.
  • the computing device 102 comprises a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 102.
  • the processor 104 of the computing device 102 may be any processor described herein or any similar processor.
  • the processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device 112, the measurement tool 116, the database 130, and/or the cloud 134.
  • the memory 106 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions.
  • the memory 106 may store information or data useful for completing, for example, any step of the methods 300, 400, 500, and/or 600 described herein, or of any other methods.
  • the memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the imaging device 112.
  • the memory 106 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 104, enable a calibration model 120 and/or image processing 122.
  • the image processing 122 may enable the processor 104 to process image data 202 (shown in Fig. 2A) received from, for example, an imaging device such as the imaging device 112 for the purpose of obtaining output data 204 (shown in Figs. 2A-2B) from the image data 202.
  • the image processing 122 may also enable the processor 104 to scale the image data to HU values for the purpose of enabling the image data 202 to provide physical information about each voxel in the image data 202.
  • the calibration model 120 may be generated from the output data 204, as will be described in detail in Figs. 2A-6.
  • the output data 204 may be received from, for example, a measurement tool 116 or as output from the image processing 122.
  • the calibration model 120 (once generated), may enable the processor 104 to extrapolate calibration values 206 (shown in Figs. 2A-2B) beyond a limited dynamic range of an imaging device, as will also be described in detail in Figs. 2A-6.
  • Such content may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines.
  • the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out the various methods and features described herein.
  • Although various contents of memory 106 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models.
  • the data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the measurement tool 116, the database 130, and/or the cloud 134.
  • the computing device 102 may also comprise a communication interface 108.
  • the communication interface 108 may be used for receiving image data or other information from an external source (such as the imaging device 112, the measurement tool 116, the database 130, the cloud 134, and/or any other system or component not part of the system 100), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 102, the imaging device 112, the measurement tool 116, the database 130, the cloud 134, and/or any other system or component not part of the system 100).
  • an external source such as the imaging device 112, the measurement tool 116, the database 130, the cloud 134, and/or any other system or component not part of the system 100.
  • the communication interface 108 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth).
  • the communication interface 108 may be useful for enabling the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
  • the computing device 102 may also comprise one or more user interfaces 110.
  • the user interface 110 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
  • the user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100.
  • the user interface 110 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 104 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 110 or corresponding thereto.
  • the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102.
  • the user interface 110 may be located proximate one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
  • the imaging device 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.), and/or any object (including, for example, objects outside of the medical field such as luggage, baggage, etc.).
  • the imaging device 112 may also yield image data without any objects and/or anatomical features for use with, for example, a calibration (e.g., air calibration, HU-calibration, CT calibration, etc.) of an imaging device (whether of the imaging device 112 or any other imaging device).
  • the image data may be of air.
  • the image data may also include a calibration object which may be used to support calibration of the imaging device.
  • Image data refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form.
  • the image data may comprise data corresponding to an anatomical feature of a patient, or to a portion thereof or any object.
  • the image data may in other instances comprise data corresponding to air.
  • the imaging device 112 may be capable of capturing a 2D image or 3D volumetric data to yield the image data or volumetric data.
  • the image data or volumetric data may have N dimensions.
  • photon counting images may comprise 2D images times N number of colors, or volumes comprising 3D data times N number of colors.
  • the imaging device 112 may be or comprise, for example, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing x-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other x-ray machine, a detector), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient.
  • the imaging device 112 in some instances may comprise two or more detectors layered on top of each other and each detector may detect or read different energy levels (e.g., one detector may read high energy images, another detector may read medium energy images, another detector may read low energy images, etc.).
  • the imaging device 112 may be contained entirely within a single housing, or may comprise a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
  • the imaging device 112 may be operable to generate a stream of image data.
  • the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
  • image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
  • the imaging device 112 comprises, for example, an x-ray imaging device having at least one dynamic range.
  • the imaging device 112 may operate at one dynamic range.
  • the imaging device 112 may comprise more than one imaging device 112 for the purpose of using one imaging device to calibrate the other imaging device.
  • a first imaging device 112A may have a first dynamic range and a second imaging device 112B may have a second dynamic range.
  • the second dynamic range may be lower than the first dynamic range, though it will be appreciated that the second dynamic range may be higher than the first dynamic range.
  • the imaging device 112 may comprise multiple dynamic ranges and may be capable of operating in a first, or high dynamic range and a second, or low dynamic range.
  • the imaging device 112 may be used to calibrate itself by obtaining output data using one of the dynamic ranges, generating the calibration model using the output data, and applying the calibration model to the imaging device 112 using another dynamic range.
  • When the imaging device 112 operates in a high dynamic range, the imaging device 112 has a lower sensitivity (to, for example, X-ray photons) and is less susceptible to reaching a saturation point. When the imaging device 112 operates in a low dynamic range, the imaging device 112 has a higher sensitivity (to, for example, X-ray photons) and is more susceptible to reaching a saturation point.
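  • A toy numeric sketch of this trade-off (the gains and the 12-bit saturation value are hypothetical illustrations, not the disclosure's detector model):

        import numpy as np

        def detector_response(exposure, gain, max_value=4095):
            # Hypothetical linear detector: reading = gain * exposure, clipped
            # at the saturation value (here a 12-bit maximum).
            return np.clip(gain * exposure, 0, max_value)

        exposure = np.linspace(0, 100, 5)  # arbitrary exposure units

        # Lower sensitivity (high dynamic range): saturates later.
        print(detector_response(exposure, gain=10.0))   # [0. 250. 500. 750. 1000.]
        # Higher sensitivity (low dynamic range): saturates sooner.
        print(detector_response(exposure, gain=100.0))  # [0. 2500. 4095. 4095. 4095.]
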
  • the imaging device 112 using a high dynamic range (or the first imaging device 112A) may be used to obtain image data 202 from which the output data 204 can be determined.
  • the output data 204 can be used to generate a calibration model 120 to extrapolate calibration values 206, which can be used to calibrate the imaging device 112 (or the second imaging device 112B) having a limited dynamic range.
  • the measurement tool 116 may be configured to output the output data 204 in instances where the output data 204 is not obtained from the imaging device 112 or the first imaging device 112A.
  • the measurement tool 116 may comprise, for example, a dosimeter.
  • the dosimeter may be configured to detect and/or measure ionizing radiation exposure and output the output data 204.
  • the dosimeter may be configured to output one or more measured radiation parameter(s).
  • the dosimeter may output the measured radiation parameter(s) from which the air kerma value can be derived.
  • the measured radiation parameter(s) may include the air kerma value.
  • the measurement tool 116 may comprise a reference detector configured to output the output data 204.
  • the reference detector may comprise a single-pixel image detector that outputs a reference detector value that is a single unitless number. It will be appreciated that a plurality of reference detector values and/or measured radiation parameter(s) can be used to generate the calibration model 120 for calibrating the imaging device 112.
  • the database 130 may store information that correlates to calibration of the imaging device 112 such as, for example, radiation parameter(s), calibration value(s), and/or image(s) obtained from the imaging device 112 or the measurement tool 116.
  • the database 130 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud 134.
  • the database 130 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
  • the cloud 134 may be or represent the Internet or any other wide area network.
  • the computing device 102 may be connected to the cloud 134 via the communication interface 108, using a wired connection, a wireless connection, or both.
  • the computing device 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud 134.
  • the system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 300, 400, 500, and/or 600 described herein.
  • the system 100 or similar systems may also be used for other purposes.
  • Turning to Fig. 2A, an example of a model architecture 200 that supports methods and systems (e.g., Artificial Intelligence (AI)-based methods and/or systems) for calibrating an imaging device (such as, for example, the imaging device 112) is shown.
  • Image data 202 may be used by a processor such as the processor 104 as input for an image processing such as the image processing 122. It will be appreciated that in some instances, such as during calibration of an imaging device, the image data may not depict any object and may simply depict air.
  • the image processing 122 may output, generate, or determine the output data 204 that may comprise radiation parameters.
  • the radiation parameter may comprise, for example, the air kerma value that can be used to generate one or more calibration values.
  • the air kerma value may be a function of a source-detector distance (e.g., a distance between a source and a detector of, for example, an x-ray imaging device), a spectrum of the imaging device 112, a tube current, and/or a pulse width.
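  • A hedged sketch of one such dependence (an assumed inverse-square, tube-output-proportional model chosen purely for illustration; the disclosure does not specify this functional form, and the output constant below is hypothetical):

        def air_kerma_estimate(tube_current_ma, pulse_width_ms,
                               source_detector_distance_m,
                               output_mgy_per_mas_at_1m=0.05):
            # Assumes kerma scales with the tube current-time product (mAs) and
            # falls off with the inverse square of the source-detector distance;
            # the spectrum dependence is folded into the assumed output constant.
            mas = tube_current_ma * pulse_width_ms / 1000.0
            return output_mgy_per_mas_at_1m * mas / source_detector_distance_m ** 2

        # Hypothetical example: 50 mA, 10 ms pulse, 1.2 m source-detector distance.
        print(air_kerma_estimate(50.0, 10.0, 1.2))  # ~0.017 mGy
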
  • the image data 202 may be received from an imaging device such as the imaging device 112 and/or a first imaging device such as the first imaging device 112A.
  • the imaging device 112 and/or the first imaging device 112A may generate or obtain the image data 202 using a high dynamic range or a dynamic range outside of a limited dynamic range of an imaging device to be calibrated.
  • Multiple sets of image data 202 may be obtained to generate multiple sets of output data 204.
  • the image data 202 (and thus, the output data 204) can be obtained for any combination of specific spectrum, source-detector distance, pulse-width, and tube current.
  • the output data 204 from the image processing 122 may be used by the processor 104 to generate a calibration model such as the calibration model 120, which may be configured to extrapolate or output the calibration values 206.
  • the output data 204 may include, for example, the air kerma value, a maximal gray-level or bit value that the imaging device 112 can provide, and/or a maximal exposure value of the imaging device 112.
  • the output data 204 may be mapped for a desired dynamic range that is, for example, beyond or outside of the limited dynamic range of the imaging device to be calibrated. It will be appreciated that the mapped output data 204 can also be scaled and/or translated to match units of the imaging device to be calibrated.
  • a model can be applied to the mapped output data 204 to generate the calibration model 120. More specifically, for example, a linear model can be applied to the mapped output data 204, from which a slope and intercept of the linear model can be calculated. The slope and the intercept can be used to extrapolate the calibration values 206 for exposures beyond the limits of the dynamic range or ranges of the imaging device being calibrated. Such extrapolation is beneficial as without the extrapolation, the imaging device will simply use the saturation value (or maximal value the imaging device can output) for values beyond the limited dynamic range, which results in undesirable saturation (or starvation for values below the limited dynamic range) in the image data.
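  • A minimal sketch of that linear-model step (the mapped data points are hypothetical, and numpy's least-squares fit stands in for whatever fitting procedure an implementation would use):

        import numpy as np

        # Hypothetical mapped output data: (air kerma, gray level) pairs measured
        # within the dynamic range available for calibration.
        kerma = np.array([0.5, 1.0, 1.5, 2.0])               # e.g., mGy
        gray_level = np.array([410.0, 820.0, 1230.0, 1640.0])

        # Fit gray_level = slope * kerma + intercept.
        slope, intercept = np.polyfit(kerma, gray_level, deg=1)

        # Extrapolate expected theoretical gray levels for exposures beyond a
        # hypothetical 12-bit saturation value of 4095.
        kerma_beyond = np.array([6.0, 8.0, 10.0])
        print(slope * kerma_beyond + intercept)  # ~[4920. 6560. 8200.]
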
  • It will be appreciated that any model (e.g., logarithmic, dual-linear, etc.) can be applied to the mapped output data 204 to generate the calibration model 120.
  • multiple sets of the output data 204 can be used to generate the calibration model 120 to improve an accuracy of the calibration model 120 and/or to update the calibration model 120.
  • output data 204 can be obtained for any combination of specific spectrum, source-detector distance, pulse- width, and tube current.
  • the calibration model 120 may also output multiple calibration values 206.
  • the calibration model 120 can be applied to an imaging device such as the imaging device 112 or the second imaging device 112B having a limited dynamic range to calibrate the imaging device 112, 112B.
  • the calibration model 120 can be used to calculate (by using extrapolation) expected theoretical values of exposure beyond the limited dynamic range.
  • image data obtained from the imaging device 112, 112B (and more particularly, image data representing voxels) can be scaled to a normalized physical quantity, and from that a reconstructed volume can be scaled to HU values.
  • Turning to Fig. 2B, a model architecture 210 that supports methods and systems (e.g., Artificial Intelligence (AI)-based methods and/or systems) for calibrating an imaging device (such as, for example, the imaging device 112) is shown.
  • the model architecture 210 is generally the same as the model architecture 200, except that the model architecture 210 does not include image data or image processing. Rather, the output data 204 is obtained from, for example, a measurement tool 116 such as a dosimeter and/or a reference detector, which can directly provide the radiation parameter.
  • the radiation parameter can include the air kerma value or other parameters from which the air kerma value (or other physical quantities that can be used for normalization) can be derived.
  • the output data 204 from the measurement tool 116 may be used by the processor 104 to generate a calibration model such as the calibration model 120.
  • the calibration model 120 may be used to extrapolate the calibration values 206.
  • the output data 204 may include, for example, an air kerma value, a maximal gray-level or bit value, and/or a maximal exposure value.
  • the calibration model 120 can be applied to an imaging device such as the imaging device 112 or the second imaging device 112B having a limited dynamic range to calibrate the imaging device 112, 112B.
  • image data obtained from the imaging device 112, 112B (and more particularly, image data representing voxels) can be scaled to a normalized physical quantity, and from that a reconstructed volume can be scaled to HU values.
  • Fig. 3 depicts a method 300 that may be used, for example, for generating a model.
  • the method 300 comprises generating a model (step 304).
  • the model may be the calibration model 120 and/or the image processing 122.
  • a processor such as the processor 104 may generate the model.
  • the calibration model 120 may be generated to facilitate and enable, for example, calibration of the imaging device 112.
  • the image processing 122 may be generated to facilitate and enable, for example, processing of image data to obtain output data for the calibration model 120.
  • the method 300 also comprises training the model (step 308).
  • the model such as the calibration model 120 may be trained using historical data from a number of imaging devices such as the imaging devices 112, 112A and/or measurement tool(s) such as the measurement tool 116.
  • the historical data may be obtained from imaging devices similar to the imaging devices 112, 112A. In other embodiments, the historical data may be obtained from any imaging device.
  • the present disclosure encompasses embodiments of the method 300 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
  • Fig. 4 depicts a method 400 that may be used, for example, for calibrating an imaging device.
  • the method 400 comprises receiving image data from an imaging device having a first dynamic range (step 404).
  • the image data may not depict any objects and/or anatomical features.
  • the image data may be of air for use with, for example, calibrating an imaging device having a limited dynamic range.
  • the image data may be the same as or similar to the image data 202 and may be received or obtained from an imaging device having a first dynamic range or a high dynamic range such as the imaging device 112.
  • the imaging device may comprise an x-ray imaging device such as a detector having a first dynamic range or a high dynamic range.
  • the image data may be received via a user interface such as the user interface 110 and/or a communication interface such as the communication interface 108 of a computing device such as the computing device 102, and may be stored in a memory such as the memory 106 of the computing device.
  • the image data may also be received from an external database or image data repository (e.g., a hospital image data storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data), and/or via the Internet or another network.
  • the image data may also be generated by and/or uploaded to any other component of a system such as the system 100.
  • the image data may be indirectly received via any other component of the system or a node of a network to which the system is connected.
  • the image data may be stored in a system (e.g., a system 100) and/or one or more components thereof (e.g., a database 130).
  • the stored image data may then be received (e.g., by a processor 104), as described above, preoperatively (e.g., before the surgery) and/or intraoperatively (e.g., during surgery).
  • the method 400 also comprises processing the image data using image processing to obtain output data (step 406).
  • the image data may be used by a processor such as the processor 104 as input for an image processing such as the image processing 122.
  • the image processing may output, generate, or determine output data such as the output data 204 that may comprise, for example, the air kerma value, a maximal gray-level or bit value that the imaging device can provide, and/or a maximal exposure value of the imaging device.
  • the output data can be used to generate a calibration model such as the calibration model 120.
  • the air kerma value may be a function of a source-detector distance, a spectrum of the imaging device 112, a tube current, and/or a pulse width.
  • steps 404 and 406 may be repeated to obtain multiple sets of output data and/or to update the output data.
  • multiple sets of image data may be obtained to generate multiple sets of output data.
  • the image data (and thus, the output data) can be obtained for any combination of specific spectrum, source-detector distance, pulse width, and tube current.
  • the method 400 also comprises generating the calibration model using the output data (step 408).
  • the output data from the image processing performed in, for example, the step 406, may be used by the processor to generate the calibration model.
  • the calibration model may be configured to extrapolate or output calibration values such as the calibration values 206.
  • the output data may be mapped for a desired dynamic range that is, for example, beyond the limited dynamic range of the imaging device to be calibrated. It will be appreciated that the mapped output data can also be scaled and/or translated to match units of the imaging device to be calibrated.
  • a model can be applied to the mapped output data to generate the calibration model.
  • a linear model can be applied to mapped output data, from which a slope and intercept of the linear model can be calculated.
  • the slope and the intercept can be used to extrapolate calibration values for exposures beyond the limited dynamic range of the imaging device being calibrated.
  • Such extrapolation is beneficial as without the extrapolation, the imaging device will simply use the saturation value (or maximal value the imaging device can output) for values beyond the limited dynamic range, which results in undesirable saturation (or starvation for values below a limited range) in the image data.
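  • A hedged sketch of how such extrapolated values might be applied (the pixel values, per-pixel exposures, and saturation point are hypothetical; the disclosure does not prescribe this exact replacement scheme):

        import numpy as np

        def apply_calibration(raw_pixels, exposure, slope, intercept, saturation=4095):
            # Readings at the saturation value are assumed clipped; the linear
            # calibration model predicts what the detector would have read.
            expected = slope * exposure + intercept
            return np.where(raw_pixels >= saturation, expected, raw_pixels)

        raw = np.array([1200.0, 3000.0, 4095.0, 4095.0])  # last two are clipped
        exposure = np.array([1.5, 3.7, 6.0, 8.0])         # hypothetical exposures
        print(apply_calibration(raw, exposure, slope=820.0, intercept=0.0))
        # -> [1200. 3000. 4920. 6560.]: clipped pixels replaced by extrapolation
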
  • It will be appreciated that any model (e.g., logarithmic, dual-linear, etc.) can be applied to the mapped output data to generate the calibration model.
  • the method 400 also comprises applying the calibration model to the imaging device having a second dynamic range (step 412).
  • the calibration model may be configured to generate calibration values beyond a limited dynamic range of an imaging device by extrapolating the calibration values from the calibration model.
  • the calibration values may be the expected theoretical values of the imaging device for outputs higher or lower than the limited dynamic range.
  • the imaging device having the second dynamic range or a limited dynamic range may be an imaging device different than the imaging device having the first dynamic range. In other words, two separate imaging devices are used to perform the steps 404 and 412.
  • the imaging device having the second dynamic range may be the same as the imaging device having the first dynamic range.
  • the imaging device is capable of operating in two or more dynamic ranges.
  • the imaging device may operate in the first dynamic range (e.g., a higher dynamic range) to obtain the output data.
  • the imaging device may then be calibrated using the calibration model and operated in the second dynamic range (e.g., a lower dynamic range).
  • the second dynamic range is lower than the first dynamic range.
  • the same imaging device may be used to perform the steps 404 and 412.
  • the method 400 also comprises receiving an image from the imaging device having the second dynamic range (step 416).
  • the image comprising image data may be received from the imaging device of the step 412 described above.
  • the image data may be captured and may be stored in a system (e.g., a system 100) and/or one or more components thereof (e.g., a database 130).
  • the image data may be a 2D image, 3D volumetric data, or a set of 2D images.
  • the image data may not depict any object. In other words, the image data may only depict air.
  • the image data may depict a calibration object that may support or aid in calibration of the imaging device.
  • the image data may depict a patient’s anatomy or portion thereof and/or an object.
  • the image data may depict multiple anatomical elements associated with the patient anatomy, including incidental anatomical elements (e.g., ribs or other anatomical objects on which a surgery or surgical procedure will not be performed) in addition to target anatomical elements (e.g., vertebrae or other anatomical objects on which a surgery or surgical procedure is to be performed).
  • incidental anatomical elements e.g., ribs or other anatomical objects on which a surgery or surgical procedure will not be performed
  • target anatomical elements e.g., vertebrae or other anatomical objects on which a surgery or surgical procedure is to be performed.
  • the method 400 also comprises scaling the image or volumetric data (step 420).
  • the image data or volumetric data may be scaled by the processor using, for example, the image processing.
  • the image data or volumetric data may be scaled to the HU values to provide information (such as physical information) about each voxel in the image data or volumetric data.
  • Such information beneficially enables identification of various features in the image data such as, for example, soft tissue and/or hard tissue. For example, different tissue characteristics, such as bone density or the difference between healthy and tumorous tissue, can be measured in image data scaled to HU values.
  • the present disclosure encompasses embodiments of the method 400 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
  • Fig. 5 depicts a method 500 that may be used, for example, for calibrating an imaging device.
  • the method 500 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
  • the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
  • a processor other than any processor described herein may also be used to execute the method 500.
  • the at least one processor may perform the method 500 by executing elements stored in a memory such as the memory 106.
  • the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 500.
  • One or more portions of a method 500 may be performed by the processor executing any of the contents of memory, such as a calibration model 120 and/or an image processing 122.
  • the method 500 comprises receiving image data from a first imaging device having a first dynamic range (step 504).
  • the step 504 may be the same as or similar to the step 404 of the method 400 described above, except that the first imaging device may be the same as or similar to the first imaging device 112A having at least a first dynamic range or a high dynamic range.
  • the method 500 comprises processing the image data using image processing to obtain output data (step 506).
  • the step 506 may be the same as or similar to the step 406 of the method 400 described above.
  • the method 500 also comprises generating a calibration model using the output data (step 508).
  • the step 508 may be the same as or similar to the step 408 of the method 400 described above.
  • the method 500 also comprises applying the calibration model to a second imaging device having a second dynamic range (step 512).
  • the step 512 may be the same as or similar to the step 412 of the method 400 described above, except that the second imaging device may be the same as or similar to the second imaging device 112B having a second dynamic range, where the second dynamic range is lower than the first dynamic range.
  • the first imaging device used in the step 504 may be different from the second imaging device used in the step 512.
  • the method 500 also comprises receiving image data from the second imaging device having the second dynamic range (step 516).
  • the step 516 may be the same as or similar to the step 416 of the method 400 described above.
  • the method 500 also comprises scaling the image data (step 520).
  • the step 520 may be the same as or similar to the step 420 of the method 400 described above.
  • the present disclosure encompasses embodiments of the method 500 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
  • Fig. 6 depicts a method 600 that may be used, for example, for calibrating an imaging device.
  • the method 600 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
  • the at least one processor may be the same as or similar to the processor(s) 104 of the computing device 102 described above.
  • a processor other than any processor described herein may also be used to execute the method 600.
  • the at least one processor may perform the method 600 by executing elements stored in a memory such as the memory 106.
  • the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 600.
  • One or more portions of a method 600 may be performed by the processor executing any of the contents of memory, such as a calibration model 120 and/or an image processing 122.
  • the method 600 comprises receiving output data from a measurement tool (step 604).
  • the measurement tool may be the same as or similar to the measurement tool 116 and may be configured to generate the output data.
  • the measurement tool is a dosimeter.
  • the measurement tool may comprise a reference detector configured to output the output data.
  • the reference detector may comprise a single-pixel image detector that outputs a reference detector value that is a single unitless number.
  • the output data may be received directly from a measurement tool.
  • the output data may comprise radiation parameter(s) and/or the air kerma value.
  • the dosimeter may provide radiation parameter(s) from which the air kerma value can be derived.
  • the method 600 also comprises generating a calibration model using the output data (step 608).
  • the step 608 may be the same as or similar to the step 408 of the method 400 described above.
  • the method 600 also comprises applying the calibration model to an imaging device having a limited dynamic range (step 612).
  • the step 612 may be the same as or similar to the step 412 of the method 400 described above.
  • the method 600 also comprises receiving image data from the imaging device having the limited dynamic range (step 616).
  • the step 616 may be the same as or similar to the step 416 of the method 400 described above.
  • the method 600 also comprises scaling the image data (step 620).
  • the step 620 may be the same as or similar to the step 420 of the method 400 described above.
  • the present disclosure encompasses embodiments of the method 600 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
  • the present disclosure encompasses methods with fewer than all of the steps identified in Figs. 3, 4, 5, and 6 (and the corresponding description of the methods 300, 400, 500, and 600), as well as methods that include additional steps beyond those identified in Figs. 3, 4, 5, and 6 (and the corresponding description of the methods 300, 400, 500, and 600).
  • the present disclosure also encompasses methods that comprise one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or comprise a registration or any other correlation.
  • Example 3 The method of example 2, wherein the radiation parameter comprises an air kerma value.
  • Example 4 The method of example 1, wherein the output data is determined from image data from at least one of the imaging device or another imaging device.
  • Example 5 The method of example 4, wherein when the image data is received from the imaging device, the image data is received when the imaging device is using a high dynamic range.
  • Example 6 The method of example 4, wherein the image data is received from the another imaging device using a high dynamic range.
  • Example 7 The method of example 4, wherein the image data comprises x-ray image data.
  • Example 8 The method of example 4, further comprising inputting the image data into an image processing model, the image processing model configured to process the image data and output a radiation parameter, wherein the radiation parameter is used to generate the calibration model.
  • Example 9 The method of example 8, wherein the radiation parameter comprises an air kerma value.
  • Example 10 The method of example 1, further comprising receiving image data from the imaging device using the limited dynamic range.
  • Example 11 The method of example 10, further comprising scaling the image data to Hounsfield (HU) values.
  • Example 12 A system for calibrating an imaging device comprising: a first imaging device having a first dynamic range; a second imaging device having a second dynamic range, wherein the second dynamic range is lower than the first dynamic range; a processor; and a memory storing data for processing by the processor, the data, when processed, causing the processor to: receive output data from the first imaging device; generate a calibration model using the output data; and apply the calibration model to the second imaging device, the calibration model configured to extrapolate calibration values outside of the limited dynamic range.
  • Example 13 The system of example 12, wherein the output data is determined from image data obtained from the first imaging device.
  • Example 14 The system of example 13, wherein the image data is free of objects and anatomical elements.
  • Example 15 The system of example 13, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to input the image data into an image processing model, the image processing model configured to process the image data and output a radiation parameter, wherein the radiation parameter is used by the calibration model to determine the calibration values.
  • Example 16 The system of example 15, wherein the radiation parameter comprises an air kerma value.
  • Example 17 A system for calibrating an imaging device comprising: an imaging device having a first dynamic range and a second dynamic range, wherein the second dynamic range is lower than the first dynamic range; a processor; and a memory storing data for processing by the processor, the data, when processed, causing the processor to: receive output data; generate a calibration model using the output data; and apply the calibration model to the imaging device, the calibration model configured to extrapolate calibration values outside of the limited dynamic range.
  • Example 18 The system of example 17, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to receive image data from the imaging device using the second dynamic range.
  • Example 19 The system of example 18, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to scale the image data to Hounsfield (HU) values.
  • Example 20 The system of example 17, wherein the output data is determined from image data obtained from the imaging device using the first dynamic range.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

Systems and methods for calibrating an imaging device are disclosed. Output data may be received, and a calibration model may be generated using the output data. The calibration model may be configured to extrapolate calibration values. The calibration model may be applied to an imaging device having a limited dynamic range. The calibration model may also be configured to extrapolate calibration values outside of the limited dynamic range.
PCT/IL2024/050628 2023-06-29 2024-06-26 Systems and methods for calibrating an imaging device Pending WO2025004041A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363524185P 2023-06-29 2023-06-29
US63/524,185 2023-06-29

Publications (1)

Publication Number Publication Date
WO2025004041A1 (fr) 2025-01-02

Family

ID=91966458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2024/050628 Pending WO2025004041A1 (fr) Systems and methods for calibrating an imaging device

Country Status (1)

Country Link
WO (1) WO2025004041A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140050301A1 (en) * 2012-08-17 2014-02-20 General Electric Company System and method for correcting for image artifacts in x-ray image data
US20190216419A1 (en) * 2017-07-12 2019-07-18 Shanghai United Imaging Healthcare Co., Ltd. System and method for air correction


Similar Documents

Publication Publication Date Title
US11164346B2 (en) Posterior image sampling to detect errors in medical imaging
US12228635B2 (en) Motion tracking in magnetic resonance imaging using RADAR and a motion detection system
US20180253838A1 (en) Systems and methods for medical imaging of patients with medical implants for use in revision surgery planning
Gomes et al. Accuracy of ITK-SNAP software for 3D analysis of a non-regular topography structure
  • KR20170060698A Computed tomography apparatus and method of controlling the same
US10610170B2 (en) Patient position monitoring system based on 3D surface acquisition technique
  • CN104011773A Sequential image acquisition method
  • JP5495886B2 Patient positioning system
US20180040121A1 (en) Method and system for automatic tube current modulation
  • CN107865658A Method and apparatus for correcting a synthetic electron density map
US11837352B2 (en) Body representations
  • WO2022256421A1 System and method for simultaneous registration of multiple lung CT scans for quantitative lung analysis
  • KR102185724B1 Method and apparatus for displaying, in a medical image, a point whose position is calibrated according to the caliper type used for measurement of an object
  • WO2025004041A1 Systems and methods for calibrating an imaging device
US12263026B2 (en) Systems, methods, and devices for multiple exposures imaging
US20200167977A1 (en) Tomographic image processing apparatus and method, and computer program product
  • CN114569146B Medical image processing method and apparatus, computer device, and storage medium
  • EP4631436A2 Systems, methods, and devices for generating a corrected image
  • CN118102968A System, apparatus, and method for robotic placement of electrodes for anatomical imaging
  • CN118042988A Prospective quality assessment for retrieval of prior imaging examinations
  • JP2022027113A Information processing device, program, information processing method, and medical system
  • WO2025074367A1 Systems and methods for verifying one or more segmentation models
US20240382271A1 (en) Systems and methods for correlating one or more motions of an anatomical element
  • WO2025229497A1 Systems and methods for generating one or more reconstructions
  • WO2025186761A1 Systems and methods for determining a position of an object relative to an imaging device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24746444

Country of ref document: EP

Kind code of ref document: A1