WO2025214799A1 - Processing projection data
Classifications
- G06T11/005: Specific pre-processing for tomographic reconstruction, e.g., calibration, source positioning, rebinning, scatter correction, retrospective gating (G06T11/003: Reconstruction from projections, e.g., tomography)
- G06T2211/412: Dynamic (G06T2211/40: Computed tomography)
Abstract
A computer-implemented method is disclosed for processing projection data of a subject in Computed Tomography (CT) imaging. According to the method, two or more different parts of the projection data are obtained, generated by a CT scanning system during a single scanning procedure, each part at least partially representing a same region of interest of the subject. Each part of the projection data is separately processed using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest. A user interface is controlled to provide a visual representation of each reconstructed CT image of the same region of interest.
Description
PROCESSING PROJECTION DATA
FIELD OF THE INVENTION
The present invention relates to the field of Computed Tomography (CT) scanning and, in particular, to the processing of projection data produced by a CT scanning system.
BACKGROUND OF THE INVENTION
There is an ongoing interest in the performance of non-invasive imaging of a subject or patient. In modern medicine, images produced using such techniques are important for aiding in the diagnosis and analysis of a condition of the subject or patient. A CT scan provides one approach for performing imaging of the subject.
In particular, CT scanners are well-established medical imaging devices that use a detector to detect an interaction between X-ray radiation and irradiated material in order to generate medical imaging data. Typically, a CT scanner will produce projection data of an imaged area. The projection data is gradually acquired over time, such that different parts of the projection data are captured at, or represent, different points in time. If the imaged area includes (e.g., part of) an anatomical structure that is undergoing an anatomical cycle, e.g., the heart or lungs, then any images produced using such projection data may suffer from motion artifacts.
There is an ongoing desire to facilitate identification of motion and/or provide additional information for a subject or patient. This would aid in improved clinical analysis of the subject or patient.
SUMMARY OF THE INVENTION
According to examples in accordance with an aspect of the invention, there is provided a computer-implemented method for processing projection data of a subject. The computer-implemented method comprises: obtaining two or more different parts of the projection data, generated by a CT scanning system during a single scanning procedure, each part at least partially representing a same region of interest of the subject; separately processing each part of the projection data using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest; and controlling a user interface to provide a visual representation of each reconstructed CT image of the same region of interest.
The present disclosure proposes to generate, from projection data produced in a single pass, multiple CT images of a same region of interest in a subject. The CT images of this same region of interest are displayed at a user interface. This allows an operator or clinician to compare the multiple CT images to establish or identify areas of the subject that moved during the capturing of the projection data, and which may therefore have motion artifacts in any images generated from the projection data.
This provides the operator or clinician with useful information for understanding the data displayed or provided to them, particularly to identify potential errors and sources of error in the information.
In some examples, for each part of the projection data, the at least one reconstructed CT image comprises a CT image depicting a predetermined view of the region of interest. In other words, a viewpoint of at least one reconstructed CT image (for each part of projection data) may be the same, e.g., such that the relative position of the subject within each reconstructed CT image is the same and the relative view of the subject within each reconstructed CT image is the same (e.g., all CT images may provide a same sagittal or coronal view of the subject).
In some examples, controlling the user interface comprises controlling a portion of the user interface to sequentially provide a visual representation of each CT image depicting the predetermined view of the same region of interest. In other words, the CT images may be provided or displayed sequentially or consecutively, i.e., to provide a sequential display of the CT images.
In at least one embodiment, controlling the user interface comprises controlling a part of the user interface to iteratively and sequentially provide a visual representation of each CT image depicting the predetermined view of the same region of interest. In other words, the sequential display of the CT images may be looped.
In some examples, for each part of the projection data, the CT image depicting a predetermined view of the region of interest is a two-dimensional CT image.
In some examples, each of the two or more different parts of the projection data represents a different temporal moment or period during the single scanning procedure.
The computer-implemented method may further comprise jointly processing the two or more parts of the projection data to produce a combined reconstructed CT image of the same region of interest.
In some examples, the computer-implemented method further comprises, for each pixel or voxel of the combined reconstructed CT image, processing the positionally corresponding pixel or voxel in each reconstructed CT image to produce a measure responsive to a difference between the value of the pixel or voxel of the combined reconstructed CT image and the value of the positionally corresponding pixel or voxel of each reconstructed CT image.
The computer-implemented method may further comprise controlling a user interface to provide a visual representation of the combined reconstructed CT image, wherein at least one visual property of each pixel or voxel of the visual representation of the combined reconstructed CT image is responsive to the measure for the pixel or voxel of the combined reconstructed CT image.
This approach provides an intuitive and easy-to-understand mechanism for identifying locations of potential error (e.g., motion-caused error) in the combined reconstructed CT image, significantly reducing the risk that the error will be overlooked or ignored.
In at least one example, at least one visual property of each pixel or voxel comprises a color of each pixel or voxel.
The computer-implemented method may further comprise performing motion compensation on the combined reconstructed CT image using the at least one reconstructed CT image of each part of the projection data.
In some examples, the single scanning procedure is a single circular CT scanning procedure or a single helical CT scanning procedure.
There is also provided a computer program product comprising computer program code means which, when executed on a computing device having a processing system, cause the processing system to perform all of the steps of any herein disclosed method.
There is also provided a device configured to process projection data, the device comprising: processing circuitry; and a memory containing instructions that, when executed by the processing circuitry, configure the processing circuitry to: obtain two or more different parts of the projection data, generated by a CT scanning system during a single scanning procedure, each part at least partially representing a same region of interest of the subject; separately process each part of the projection data using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest; and control a user interface to provide a visual representation of each reconstructed CT image of the same region of interest.
There is also provided a system comprising the above-described device and the user interface.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
Figure 1 illustrates a system in which embodiments may be employed;
Figure 2 illustrates a proposed method;
Figure 3 illustrates a detection system for generating projection data; and
Figure 4 illustrates weighting functions for producing respective parts of the projection data.
DETAILED DESCRIPTION OF THE EMBODIMENTS
It should be understood that the detailed description and specific examples, while indicating exemplary embodiments of the apparatus, systems and methods, are intended for purposes of illustration only and are not intended to limit the scope of the invention. These and other features, aspects, and advantages of the apparatus, systems and methods of the present invention will become better understood from the following description, appended claims, and accompanying drawings. It should be understood that the Figures are merely schematic and are not drawn to scale. It should also be understood that the same reference numerals are used throughout the Figures to indicate the same or similar parts.
The invention provides a mechanism for aiding the identification of motion within a subject. Projection data is split into different parts, each of which represents a same region of interest. The parts are separately processed to produce a respective reconstructed CT image. The reconstructed CT images are displayed to facilitate identification of motion within the subject.
Figure 1 illustrates a system 100 in which embodiments may be employed, for improved contextual understanding. The system 100 comprises a CT scanning system 110 and a device 120.
The CT scanning system 110 is configured to capture and/or generate (CT) projection data 150 of a subject 190 during a single scanning procedure. In this context, the projection data may comprise the raw data captured by the CT scanning system 110 during the single scanning procedure, before reconstruction has been performed. The projection data may undergo initial filtering and/or processing.
More particularly, the projection data is captured by sampling a set of one or more projection data elements using a detection system as the subject 190 and the detection system are moved with respect to one another. Each set therefore represents data captured during a single, respective capture period (i.e., during one iteration of the sampling). In this way, the projection data comprises a sequence of sets of one or more projection data elements.
The detection system typically comprises one or more clusters of one or more detectors, e.g., one or more rows of detectors. Thus, a cluster of detectors may comprise one or more rows of detectors. Each row may lie generally perpendicular to an average direction of relative movement between the detection system and the subject, which is usually along an axis along which the subject lies (i.e., the vertical axis). As an alternative to a row, each cluster may comprise a grouping of one or more detectors (e.g., defining a section such as a quarter/quadrant of the detection system). Each element of projection data may contain data captured by only a single cluster of the detectors during a single, respective capture period. Therefore, each set of one or more projection data elements may correspondingly contain data captured by the detection system during a single, respective capture period. Nonetheless, it will be appreciated that the data contained in each projection data element may be captured at different points in time (e.g., if each detector is not simultaneously activated or used to sample).
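As a purely illustrative sketch of this organization (the class and field names below are assumptions chosen for illustration, not terminology fixed by the disclosure), the projection data can be pictured as a sequence of sets of per-cluster elements:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ProjectionDataElement:
    """Data captured by a single cluster (e.g., one row) of detectors
    during a single, respective capture period."""
    cluster_id: int        # which cluster/row of detectors produced this element
    capture_index: int     # which iteration of the sampling (capture period)
    values: np.ndarray     # measured values for that cluster

# The projection data is a sequence of sets of one or more elements:
# one set per capture period, one element per detector cluster.
ProjectionData = list[list[ProjectionDataElement]]
```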
In producing a projection data element, it is known to apply one or more weightings (e.g., a weighting function) to different values in the projection data element.
Known types of scanning procedures include circular CT scanning procedures and helical CT scanning procedures. The function and design of CT scanning systems for carrying out such CT scanning procedures are well known and established in the art.
The device 120 comprises processing circuitry and a memory. The memory contains instructions that, when executed by the processing circuitry, configure the processing circuitry to perform one or more tasks or functions. The device 120 may, for instance, be replaced by any other form of processing system.
The device 120 may be communicatively coupled to the CT scanning system 110 so as to receive at least projection data from the CT scanning system. The communicative coupling may be wired or wireless, and approaches are known in the art.
In other approaches, the CT scanning system 110 may store the projection data in a memory or storage unit 130 (which may form part of the system 100). The device 120 may be communicatively coupled to the memory or storage unit, e.g., to receive or obtain projection data from the memory or storage unit 130. The communicative coupling may be wired or wireless, and approaches are known in the art.
The processing circuitry may include, but is not limited to, one or more of the following: conventional microprocessors, application specific integrated circuits (ASICs), and/or field-programmable gate arrays (FPGAs). The memory may comprise any volatile and/or non-volatile computer memory such as RAM, PROM, EPROM, and EEPROM. The instructions contained in the memory may effectively define one or more programs that, when executed on the processing circuitry, cause the processing circuitry to perform encoded functions.
As a typical use-case scenario, the processing system 120 is often configured to obtain or receive the projection data 150, produced by the CT scanning system 110, and perform reconstruction on the projection data 150 to thereby produce CT imaging data. The processing system 120 may then control a user interface 140 (e.g., a screen) to provide a visual representation of the CT imaging data. Approaches for controlling a user interface in this way are well established in the art.
The present disclosure proposes to exploit redundancy in projection data to aid in the identification of regions of motion within CT imaging data reconstructed from the projection data. For instance, if a helical CT scanning procedure is performed at a pitch of less than 2, then regions of the subject will be present in multiple different parts of the projection data (at pitch p, a point of the subject is irradiated over roughly 360/p degrees of gantry rotation, which exceeds the approximately 180 degrees needed for reconstruction whenever p < 2, so that some views are redundant).
Figure 2 is a flowchart illustrating a proposed method 200, which may be performed or carried out by the processing system 120 illustrated in Figure 1.
The method 200 comprises a step 210 of obtaining two or more different parts of the projection data, which has been generated by a/the CT scanning system during a single scanning procedure.
Each part of the projection data at least partially represents a same region of interest of the subject. In other words, each part of the projection data obtained in step 210 contains data that changes responsive to any changes in the same region of interest of the subject. Put another way, the regions (or volume) of the subject represented by the obtained parts of the projection data overlap one another (at the region of interest).
More particularly, each part of the projection data may be processed, using a reconstruction technique, to produce a respective image of the same region of interest.
In a first scenario, each set of one or more projection data elements (of the projection data) represents a region of the subject that at least partially overlaps a region represented by a neighboring set. The overlap thereby represents the region of interest. In this first scenario, each part of the projection data may be formed from a respective set of one or more projection data elements, or from two or more sets. In this approach, each of the two or more different parts of the projection data represents a different temporal moment or period during the single scanning procedure.
In a second scenario, each set comprises two or more projection data elements, wherein each projection data element in a same set represents a region of the subject that at least partially overlaps a region represented by a neighboring projection data element. The overlap between the projection data elements thereby represents the region of interest. In this second scenario, each part of the projection data may be formed from a respective projection data element in a same set of two or more projection data elements or a combination of two or more projection data elements in the same set of two or more projection data elements.
Figure 3 conceptually illustrates one example approach for defining the parts of the projection data from a set of one or more projection data elements in the second scenario.
In this example, the set of two or more projection data elements is produced, during a single sampling period, by a detection system 300. The detection system 300 comprises a plurality of rows 301, 302 of detectors. Each projection data element is produced by a respective one of the rows of detectors. Each part of the projection data comprises projection data elements from a different cluster 310, 320, 330 of the rows of detectors; these clusters may overlap one another.
Thus, in this example, each part of the projection data is produced by a different cluster 310, 320, 330 of detectors, each cluster comprising one or more rows of detectors (here: multiple rows of detectors).
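A minimal sketch of this partitioning, assuming the data for one capture period is held as a (rows x channels) array (the function name and the particular overlap scheme are illustrative assumptions, not taken from the disclosure):

```python
import numpy as np

def split_into_row_clusters(capture: np.ndarray,
                            n_clusters: int = 3,
                            overlap: int = 2) -> list[np.ndarray]:
    """Split one capture period of projection data (detector rows x channels)
    into overlapping clusters of rows, one cluster per part of the projection
    data, mirroring the clusters 310, 320, 330 of Figure 3."""
    n_rows = capture.shape[0]
    base = n_rows // n_clusters
    parts = []
    for k in range(n_clusters):
        start = max(0, k * base - overlap)
        stop = min(n_rows, (k + 1) * base + overlap)
        parts.append(capture[start:stop])  # neighboring clusters share rows
    return parts

# Example: 16 detector rows split into 3 overlapping clusters.
parts = split_into_row_clusters(np.random.rand(16, 128))
```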
In some variants, a weighting function or set of weighting values may be applied to each part of the projection data.
In some examples, to generate each part of the projection data, a different weighting function can be applied to the set of one or more projection data elements. Each weighting function may effectively apply a different weight to (e.g. each value in) different projection data elements, e.g., apply a respective weight to the data produced by different rows of detectors. This effectively provides a mechanism for selecting the contribution of different rows of detectors to a respective part of the projection data.
Figure 4 conceptually illustrates this approach. In particular, Figure 4 illustrates, for three different weighting functions 410, 420, 430, a weight W applied to each projection data element C (where each projection data element is produced by a different cluster of detectors). Each weighting function defines a weight for each projection data element of the set of projection data elements.
Processing the set of projection data elements using the three different weighting functions 410, 420, 430 produces three different parts of the projection data. Each weighting function is here designed to partially overlap at least one other weighting function. In this context, a weighting function that partially overlaps another weighting function is configured to apply a non-zero weight to at least one of the same projection data elements as another weighting function.
In the context of the present disclosure, applying a weight to a projection data element may comprise multiplying (e.g., each value in) the projection data element by the weight. Thus, the weight may be a numeric value.
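The following sketch illustrates one possible family of overlapping weighting functions in the spirit of Figure 4 (the trapezoidal shape and the particular centers and widths are assumptions chosen for illustration):

```python
import numpy as np

def trapezoidal_weights(n_elements: int, center: float, width: float) -> np.ndarray:
    """Weight 1.0 on a plateau around `center`, ramping linearly to 0.0 at the
    edges, so that neighboring weighting functions partially overlap."""
    c = np.arange(n_elements)              # projection data element index C
    half, ramp = width / 2.0, width / 4.0
    return np.clip((half - np.abs(c - center)) / ramp, 0.0, 1.0)

n_elements = 12                            # e.g., one element per detector row
centers = [2.0, 5.5, 9.0]                  # three functions, as in Figure 4
weight_fns = [trapezoidal_weights(n_elements, c, width=7.0) for c in centers]

# Applying a weight multiplies each value in the element by that weight, e.g.:
# part_k = weight_fns[k][:, None] * capture    # capture: (rows x channels)
```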
Turning back to Figure 2, in circumstances where both the first and second scenarios are true, then each part of the projection data may, for instance, be formed from a respective projection data element in two or more sets (of two or more projection data elements), in which each of the two or more sets at least partially represent a same region of the subject.
Moreover, in circumstances where both the first and second scenarios are true, each part of the projection data may, for instance, be formed from (the combination of) two or more projection data elements from two or more sets (of two or more projection data elements), in which each of the two or more sets at least partially represents a same region of the subject.
A wide variety of other techniques and approaches for defining parts of the projection data that at least partially represent a same region of interest of the subject will be readily apparent to the skilled person.
The method 200 also comprises a step 220 of separately processing each part of the projection data using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest. In this way, multiple reconstructed CT images of the same region of interest are produced in step 220. Thus, redundant information in the projection data is used to produce multiple reconstructed CT images of the same region of interest.
Any appropriate technique for performing CT image reconstruction may be performed on each part of the projection data, which techniques are well known and established in the art.
The reconstructed CT images may be 2D CT images or 3D CT images.
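As an illustration of step 220 under the first scenario, the sketch below uses scikit-image's parallel-beam radon/iradon transforms as a simple stand-in for a clinical reconstruction algorithm: one simulated rotation is split into two overlapping temporal windows, and each window is reconstructed separately into an image of the same region.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()
theta = np.linspace(0.0, 360.0, 720, endpoint=False)   # one rotation, 0.5 deg steps
sinogram = radon(phantom, theta=theta)                 # one column per view angle

# Two overlapping temporal windows, each spanning more than 180 degrees,
# so each part can be separately reconstructed.
part_a = slice(0, 400)      # views from   0.0 to 199.5 degrees
part_b = slice(320, 720)    # views from 160.0 to 359.5 degrees

recon_a = iradon(sinogram[:, part_a], theta=theta[part_a], filter_name="ramp")
recon_b = iradon(sinogram[:, part_b], theta=theta[part_b], filter_name="ramp")

# Reconstructing the full sinogram would correspond to the combined
# reconstruction of step 240 discussed below:
# combined = iradon(sinogram, theta=theta, filter_name="ramp")
```

For a static phantom the two reconstructions are nearly identical; any motion occurring between the two windows would appear as differences between them.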
The reconstructed CT images themselves provide an indication of the occurrence of motion in the region of interest during the scanning procedure. In particular, differences between the representations of the region of interest within the reconstructed CT images can be attributed to motion artifacts.
The reconstructed CT images can therefore be exploited to identify motion(s) in the region of interest, e.g., for automated correction or attenuation of motion artifacts or to highlight areas of motion to an observer (e.g., of a CT image produced from the projection data).
The method 200 further comprises a step 230 of controlling a user interface to provide a visual representation of each reconstructed CT image of the same region of interest. Thus, step 230 comprises displaying, at a user interface, a visual representation of each reconstructed CT image. Approaches for providing a visual representation of an image are well known in the art.
In preferred examples, step 230 comprises controlling a portion of the user interface to (preferably iteratively and) sequentially provide a visual representation of each CT image of the same region of interest. In this way, a looped or dynamic display of the reconstructed CT images can be provided.
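A minimal sketch of such a looped display using matplotlib, assuming the reconstructions are held in a list of 2D arrays (FuncAnimation repeats by default, which gives the iterative, sequential display):

```python
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def loop_reconstructions(recons, interval_ms=500):
    """Iteratively and sequentially display each reconstructed CT image
    of the same region of interest in a repeating loop."""
    fig, ax = plt.subplots()
    artist = ax.imshow(recons[0], cmap="gray")
    ax.set_axis_off()

    def update(i):
        artist.set_data(recons[i])           # swap in the next reconstruction
        ax.set_title(f"Part {i + 1} of {len(recons)}")
        return (artist,)

    anim = FuncAnimation(fig, update, frames=len(recons), interval=interval_ms)
    plt.show()
    return anim

# loop_reconstructions([recon_a, recon_b])
```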
Step 230 is particularly advantageous when each reconstructed CT image depicts a (same) predetermined view of the region of interest. In particular, this allows for deviations or changes of the region of interest for different parts of the projection data to be readily and immediately identified. More specifically, this allows for regions of motion within the region of interest portrayed in the reconstructed CT images to be identifiable.
Displaying the visual representations of the CT images thereby helps a clinician to identify the location of any motion within any CT image of the region of interest produced using the projection data. This can be exploited to aid the clinician in more readily identifying less trustworthy regions or unwanted motion.
Figure 2 illustrates some additional optional steps that may be performed by the method 200, which are hereafter described.
The method 200 may further comprise a step 240 of jointly processing the two or more parts of the projection data to produce a combined reconstructed CT image of the same region of interest.
In some examples, step 240 may, for instance, comprise inputting the two or more parts of the projection data into a reconstruction algorithm to produce the combined reconstructed CT image directly from the projection data. Step 240 may be performed as an overall reconstruction process for producing a reconstructed CT image using the entire projection data. The combined reconstructed CT image may be a 2D or a 3D CT image.
In some examples, the method 200 further comprises a step 242 of performing motion compensation on the combined reconstructed CT image using the at least one reconstructed CT image of each part of the projection data. Approaches for performing motion compensation using reconstructed images of a same region of interest are known in the art, such as the techniques put forward by Van Stevendaal, U., et al., "A motion-compensated scheme for helical cone-beam reconstruction in cardiac CT angiography," Medical Physics 35.7, Part 1 (2008): 3239-3251, or Schafer, Dirk, et al., "Motion-compensated and gated cone beam filtered back-projection for 3-D rotational X-ray angiography," IEEE Transactions on Medical Imaging 25.7 (2006): 898-906. Generally, such techniques make use of a motion field or motion field vectors between pairs of reconstructed CT images.
An alternative approach to performing step 240 is to combine the reconstructed CT images (produced in step 220). This combination may, for instance, comprise summing or averaging the reconstructed CT images, in order to smooth noise.
In some examples, during this process, the combining of the reconstructed CT images may comprise performing motion compensation on one of the reconstructed CT images using the other reconstructed CT image(s). Approaches for performing motion compensation using reconstructed images of a same region of interest have been previously described.
In further examples, the method 200 may comprise a step 270 of identifying the reconstructed CT image associated with the least motion. This can be performed in an automated approach, for instance, by identifying the reconstructed CT image for which an average of a measure of motion to any other reconstructed CT image is the lowest. Approaches for measuring or quantifying an amount of motion are well known in the art.
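A sketch of one such automated approach, using mean absolute difference as a simple stand-in for the motion measure (any established motion quantification technique could be substituted):

```python
import numpy as np

def least_motion_index(recons: list) -> int:
    """Return the index of the reconstruction whose average motion measure
    to every other reconstruction is the lowest (step 270)."""
    n = len(recons)
    averages = []
    for i in range(n):
        diffs = [np.mean(np.abs(recons[i] - recons[j]))
                 for j in range(n) if j != i]
        averages.append(np.mean(diffs))    # average measure to the others
    return int(np.argmin(averages))
```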
An alternative approach to performing step 270 is to receive a user input, e.g., from an input user interface of the system. The user input may identify the reconstructed CT image associated with the least motion, e.g., as determined or assessed by the clinician or operator.
In some examples, if step 270 is performed, then step 240 may comprise using the identified reconstructed CT image as the reconstructed CT image on which the motion compensation is performed.
In some examples, if the combined reconstructed CT image is produced by combining the reconstructed CT images (produced in step 220), then one or more of the reconstructed CT images may be omitted from the combination, e.g., responsive to a user indication or flag. This approach allows, for instance, for certain reconstructed CT images to be left out of the combination if unwanted motion during the scanning procedure occurs that cannot be acceptably (to the user) compensated for by motion compensation.
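A minimal sketch of this combination-by-averaging, with the option of omitting user-flagged reconstructions as just described (the function and parameter names are illustrative assumptions):

```python
import numpy as np

def combine_reconstructions(recons: list, omit: set = frozenset()) -> np.ndarray:
    """Average the per-part reconstructions to smooth noise, leaving out any
    reconstruction flagged by the user (e.g., due to uncompensatable motion)."""
    kept = [r for i, r in enumerate(recons) if i not in omit]
    return np.mean(kept, axis=0)

# combined = combine_reconstructions([recon_a, recon_b], omit={1})
```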
In some examples, the method 200 may further comprise a step 245 of displaying the combined reconstructed CT image of the region of interest. Put another way, step 245 may further comprise controlling a portion of the user interface to provide a visual representation of the combined CT image depicting (e.g., the predetermined view of) the region of interest. The combined reconstructed CT image will have significantly better signal-to-noise than any of the reconstructed CT images (produced in step 220).
This approach of displaying the combined reconstructed CT image is particularly advantageous when the step 230 is performed, e.g., with the reconstructed CT images being iteratively and sequentially displayed, as it facilitates immediate identification (by a clinician) of movement within the region of interest, and therefore areas of the combined reconstructed CT image that are less trustworthy or accurate.
The method 200 may further comprise a step 250 of, for each pixel or voxel of the combined reconstructed CT image, processing the positionally corresponding pixel or voxel in each reconstructed CT image to produce a measure responsive to the differences between the value of the pixel or voxel of the combined reconstructed CT image and the value of the positionally corresponding pixel or voxel of each reconstructed CT image. In this context, a positionally corresponding pixel or voxel is a pixel or voxel that represents a same portion of the subject as the pixel/voxel of the combined reconstructed CT image. This correspondence can be readily determined if the combined reconstructed CT image provides a same view of the region of interest as each reconstructed CT image, or if the combined reconstructed CT image is spatially registered (using known registration techniques) to each reconstructed CT image.
In other words, step 250 may comprise, for each pixel/voxel that represents a part of the region of interest in the combined reconstructed CT image, quantifying a respective difference between the value of the pixel/voxel and the value of the corresponding pixel/voxel that represents the same part of the region of interest in each reconstructed CT image (produced in step 220). For each pixel/voxel, the quantified respective differences are then processed or combined (e.g., summed, averaged or multiplied) to produce the measure for the pixel/voxel of the combined reconstructed CT image.
The measure for each pixel/voxel (produced from these differences) can effectively represent a measure of confidence in the value of the pixel/voxel of the combined reconstructed CT image. In particular, the greater the combined difference, the lower the confidence in the value of the combined reconstructed CT image (e.g., as there may be significant motion artifacts at the location represented by the pixel/voxel).
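A non-limiting sketch of step 250, assuming all volumes share the same geometry (or have already been registered to the combined image) and using the absolute difference as the per-voxel quantification, might look as follows; the helper name is an assumption.

```python
import numpy as np

def per_voxel_measure(combined, images, reduce="sum"):
    """Combine, voxel by voxel, the absolute differences between the combined
    reconstruction and each individual reconstruction."""
    diffs = np.stack([np.abs(combined - img) for img in images])
    if reduce == "sum":
        return diffs.sum(axis=0)
    if reduce == "mean":
        return diffs.mean(axis=0)
    if reduce == "prod":
        return diffs.prod(axis=0)
    raise ValueError(f"unknown reduction: {reduce}")
```

A large value of the returned measure at a voxel then flags low confidence in the corresponding value of the combined image.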
The calculated measures may be used to modify a visual representation of the combined reconstructed CT image, e.g., if provided in step 245. In particular, at least one visual property of each pixel or voxel of the visual representation of the combined reconstructed CT image may be responsive to the measure for that pixel or voxel of the combined reconstructed CT image.
This effectively provides a mechanism for controlling the appearance of a pixel or voxel of the combined reconstructed CT image dependent upon the confidence that the value for the pixel/voxel does not represent an area in which motion has occurred during the scanning procedure.
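For instance, one simple, purely illustrative rendering, assuming the measure has been computed for a two-dimensional slice, tints low-confidence pixels red while leaving high-confidence pixels as ordinary greyscale. The function name and the fixed display window are assumptions made for the sake of the example.

```python
import numpy as np

def confidence_overlay(slice_hu, measure, window=(-1000.0, 1000.0)):
    """Render a 2-D CT slice as an RGB array whose red tint increases with
    the per-pixel measure (i.e., with decreasing confidence)."""
    lo, hi = window
    grey = np.clip((slice_hu - lo) / (hi - lo), 0.0, 1.0)
    # Normalise the measure to [0, 1]; high values mean low confidence.
    alpha = measure / (measure.max() + 1e-9)
    # Suppress green/blue where confidence is low, leaving a red tint.
    return np.stack([grey, grey * (1 - alpha), grey * (1 - alpha)], axis=-1)
```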
In some examples, for instance if step 270 is performed, the method 200 may further comprise a step 280 of performing one or more measurements using the identified reconstructed CT image. The measurements may, for instance, be anatomical measurements of one or more anatomical elements or features represented in the identified reconstructed CT image.
As an alternative example, for instance if step 240 is performed, step 280 may comprise performing one or more measurements using the combined reconstructed CT image.
Approaches for automatically performing anatomical measurements using a (combined) reconstructed CT image are well known in the art, and are not described for the sake of conciseness.
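Purely as an illustration of one elementary measurement, the sketch below computes the physical distance between two landmark voxels (e.g., for a diameter measurement), given the voxel spacing of the image; the helper and its inputs are assumptions, not part of the disclosure.

```python
import numpy as np

def landmark_distance_mm(p0, p1, voxel_spacing):
    """Physical distance in mm between two (z, y, x) voxel indices,
    given the voxel size in mm along each axis."""
    delta = (np.asarray(p1) - np.asarray(p0)) * np.asarray(voxel_spacing, dtype=float)
    return float(np.linalg.norm(delta))
```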
If performed, step 280 may further comprise displaying the one or more measurements. Put another way, step 280 may further comprise controlling a portion of the user interface to provide a visual representation of the one or more measurements.
The skilled person would be readily capable of developing a device for carrying out any herein described method. Thus, each step of the flow chart may represent a different action performed by processing circuitry of a device, e.g., carrying out instructions stored by a memory of the device.
Embodiments may therefore make use of processing circuitry. A processor is one example of processing circuitry which employs one or more microprocessors that may be programmed using software (e.g., microcode), e.g., stored by the memory of the device, to perform the required
functions. Processing circuitry may however be implemented with or without employing a processor, and also may be implemented as a combination of dedicated hardware to perform some functions and a processor (e.g., one or more programmed microprocessors and associated circuitry) to perform other functions.
Examples of processing circuitry components that may be employed in various embodiments of the present disclosure include, but are not limited to, conventional microprocessors, application specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).
Processing circuitry is associated with memory, e.g., one or more storage media such as volatile and non-volatile computer memory, e.g., RAM, PROM, EPROM, and EEPROM. The memory is encoded with one or more programs that, when executed on the processing circuitry, perform the required functions. The one or more memories may be fixed within the processing circuitry or may be transportable, such that the one or more programs stored thereon can be loaded into the processing circuitry.
It will be understood that disclosed methods are preferably computer-implemented methods. As such, there is also proposed the concept of a computer program comprising code means for implementing any described method when said program is run on a processing system, such as a computer or the processing circuitry of the previously mentioned device. Thus, different portions, lines or blocks of code of a computer program according to an embodiment may be executed by a processing system, computer or processing circuitry to perform any herein described method.
There is also proposed a non-transitory storage medium that stores or carries a computer program or computer code that, when executed by a processing system or processing circuitry, causes the processing system or processing circuitry to carry out any herein described method.
In some alternative implementations, the functions noted in the block diagram(s) or flow chart(s) may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. If the term "adapted to" is used in the claims or description, it is noted the term "adapted to" is intended to be equivalent to the term "configured to". If the term "arrangement" is used in the claims or description, it is noted the term "arrangement" is intended to be equivalent to the term "system", and vice versa.
A single processor or other unit may fulfill the functions of several items recited in the claims. If a computer program is discussed above, it may be stored/distributed on a suitable medium,
such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Any reference signs in the claims should not be construed as limiting the scope.
Claims
1. A computer-implemented method for processing projection data of a subject in Computed Tomography (CT) imaging, the computer-implemented method comprising: obtaining two or more different parts of the projection data, generated by a CT scanning system during a single scanning procedure, each part at least partially representing a same region of interest of the subject; separately processing each part of the projection data using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest; and controlling a user interface to provide a visual representation of each reconstructed CT image of the same region of interest.
2. The computer-implemented method of claim 1, wherein, for each part of the projection data, the at least one reconstructed CT image comprises a CT image depicting a predetermined view of the region of interest.
3. The computer-implemented method of claim 2, wherein controlling the user interface comprises controlling a portion of the user interface to sequentially provide a visual representation of each CT image depicting the predetermined view of the same region of interest.
4. The computer-implemented method of claim 3, wherein controlling the user interface comprises controlling a part of the user interface to iteratively and sequentially provide a visual representation of each CT image depicting the predetermined view of the same region of interest.
5. The computer-implemented method of any of claims 2 to 4, wherein, for each part of the projection data, the CT image depicting the predetermined view of the region of interest is a two-dimensional CT image.
6. The computer-implemented method of any of claims 1 to 5, wherein each of the two or more different parts of the projection data represents a different temporal moment or period during the single scanning procedure.
7. The computer-implemented method of any of claims 1 to 6, further comprising jointly processing the two or more parts of the projection data to produce a combined reconstructed CT image of the same region of interest.
8. The computer-implemented method of claim 7, further comprising, for each pixel or voxel of the combined reconstructed CT image, processing the positionally corresponding pixel or voxel in each reconstructed CT image to produce a measure responsive to a difference between the value of the pixel or voxel of the combined reconstructed CT image and the value of the positionally corresponding pixel or voxel of each reconstructed CT image.
9. The computer-implemented method of claim 8, further comprising controlling a user interface to provide a visual representation of the combined reconstructed CT image, wherein at least one visual property of each pixel or voxel of the visual representation of the combined reconstructed CT image is responsive to the measure for the pixel or voxel of the combined reconstructed CT image.
10. The computer-implemented method of claim 9, wherein the at least one visual property of each pixel or voxel comprises a color of each pixel or voxel.
11. The computer-implemented method of any of claims 7 to 10, further comprising performing motion compensation on the combined reconstructed CT image using the at least one reconstructed CT image of each part of the projection data.
12. The computer-implemented method of any of claims 1 to 11, wherein the single scanning procedure is a single circular CT scanning procedure or a single helical CT scanning procedure.
13. A computer program product comprising computer program code means which, when executed on a computing device having a processing system, cause the processing system to perform all of the steps of the method according to any of claims 1 to 12.
14. A device for processing projection data of a subject in Computed Tomography (CT) imaging, the device comprising: processing circuitry; and a memory containing instructions that, when executed by the processing circuitry, configure the processing circuitry to: obtain two or more different parts of the projection data, generated by a CT scanning system during a single scanning procedure, each part at least partially representing a same region of interest of the subject; separately process each part of the projection data using a reconstruction algorithm to produce, for each part of the projection data, at least one reconstructed CT image of the same region of interest; and
control a user interface to provide a visual representation of each reconstructed CT image of the same region of interest.
15. A system comprising the device of claim 14 and the user interface.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463633073P | 2024-04-12 | 2024-04-12 | |
| US63/633,073 | 2024-04-12 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025214799A1 (en) | 2025-10-16 |
Family
ID=95250899
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/058715 (WO2025214799A1, pending) | Processing projection data | 2024-04-12 | 2025-03-31 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025214799A1 (en) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230145920A1 (en) * | 2021-11-11 | 2023-05-11 | GE Precision Healthcare LLC | Systems and methods for motion detection in medical images |
Non-Patent Citations (3)
| Title |
|---|
| GIBBONS R J ET AL: "Assessment of regional left ventricular function using gated radionuclide angiography", AMERICAN JOURNAL OF CARDIOLOGY, CAHNERS PUBLISHING CO., NEWTON, MA, US, vol. 54, no. 3, August 1984 (1984-08-01), pages 294 - 300, XP023200042, ISSN: 0002-9149, [retrieved on 19840801], DOI: 10.1016/0002-9149(84)90186-3 * |
| SCHAFER, DIRK ET AL.: "Motion-compensated and gated cone beam filtered back-projection for 3-D rotational X-ray angiography", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 25, no. 7, 2006, pages 898 - 906 |
| VAN STEVENDAAL, U. ET AL.: "A motion-compensated scheme for helical cone-beam reconstruction in cardiac CT angiography", MEDICAL PHYSICS, vol. 35, no. 7, 2008, pages 3239 - 3251, XP012116151, DOI: 10.1118/1.2938733 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25716081; Country of ref document: EP; Kind code of ref document: A1 |