
GB2453177A - Creating an enhanced image using information on the relationship of two images - Google Patents

Creating an enhanced image using information on the relationship of two images Download PDF

Info

Publication number
GB2453177A
Authority
GB
United Kingdom
Prior art keywords
image data
image
mask
processing
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0719076A
Other versions
GB2453177C (en)
GB2453177B (en)
GB0719076D0 (en)
Inventor
Thomas Edward Marchant
Christopher John Moore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHRISTIE HOSPITAL NHS FOUNDATION TRUST
Original Assignee
CHRISTIE HOSPITAL NHS FOUNDATION TRUST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHRISTIE HOSPITAL NHS FOUNDATION TRUST filed Critical CHRISTIE HOSPITAL NHS FOUNDATION TRUST
Priority to GB0719076A priority Critical patent/GB2453177C/en
Publication of GB0719076D0 publication Critical patent/GB0719076D0/en
Priority to PCT/GB2008/002892 priority patent/WO2009040497A1/en
Publication of GB2453177A publication Critical patent/GB2453177A/en
Application granted granted Critical
Publication of GB2453177B publication Critical patent/GB2453177B/en
Publication of GB2453177C publication Critical patent/GB2453177C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/0065
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30092 Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for generating enhanced image data. First image data, for example CT image data, and second image data, for example CBCT image data, are received. Data indicating a relationship between said first image data and said second image data is generated, for example in the form of a shading map. The generated data is applied to said second image data to generate enhanced image data.

Description

IMAGE ENHANCEMENT METHOD
The present invention relates to a method of enhancing image data. More particularly, the invention relates to a method of generating enhanced image data by processing first and second image data. The enhanced image data may be particularly appropriate for visual interpretation.
Various imaging methods are known in the art. Many such imaging methods find applications in medical imaging in which images of a patient or part of a patient are generated. Such imaging techniques are clinically useful in that they allow non-invasive investigation of a patient, therefore allowing appropriate diagnoses to be made.
One imaging method used in medical applications is computed tomography (CT) imaging. A CT image is obtained by acquiring measurements of a patient at a plurality of points along a longitudinal axis. Highly collimated X-ray fan beams are emitted perpendicular to a point on the longitudinal axis, through the patient, and attenuation of each fan beam is measured. The resulting measurements are tomographically reconstructed into a two-dimensional slice depicting and physically characterising patient anatomy at a given longitudinal point, according to methods known in the art. A three-dimensional volume of image data can be displayed as a plurality of two-dimensional slices taken at different longitudinal points.
Each image element (e.g. a pixel or voxel) in the image data represents the radiodensity, measured in Hounsfield units, of a point on a plane perpendicular to the longitudinal axis of the patient. The size, or resolution, of an image element is given by the lateral and vertical distances between a data point corresponding to the image element and data points corresponding to the nearest lateral and vertical image elements in the image, as well as the longitudinal thickness of the slice. Within a slice the data points reconstructed from the measurements are sampled on a regular grid and, therefore, all image elements in the image have the same resolution.
Longitudinally the resolution depends on the slice thickness that has been selected.
A further known imaging method is Cone Beam Computed Tomography (CBCT) imaging. This imaging method can be used as an alternative to fan beam CT imaging of the type described above. By emitting a less collimated, i.e. cone shaped, X-ray beam and measuring the attenuation of the beam after it passes through the patient, an image may be constructed, from the measured attenuation values, for each of a plurality of points along the longitudinal axis of the patient. However, measured radiodensity values produced by CBCT imaging are subject to increased error as compared with values produced by fan beam CT imaging.
The use of CBCT imaging is beneficial in some applications as CBCT image data can be obtained more easily than CT image data. In particular, by increasing the cone beam angle CBCT image data can be obtained without having to move a patient from one longitudinal point to the next. Avoiding the need to move a patient from one position to another to allow the generation of image data is considered advantageous in some applications. The lower quality of image data obtained using CBCT imaging is however disadvantageous.
It is an object of some embodiments of the present invention to obviate or mitigate at least some of the problems set out above. More particularly, but not exclusively, it is an object of particular embodiments of the present invention to provide a method allowing the enhancement of CBCT image data.
According to the present invention, there is provided a method for generating enhanced image data, the method comprising: receiving first image data and second image data; generating data indicating a relationship between said first image data and said second image data; and applying said generated data to said second image data to generate enhanced image data. The relationship may be an arithmetic relationship.
In this way relatively high quality first image data can be processed together with relatively low quality second image data so as to improve the quality of the second image data. Each of the first image data and the second image data may comprise a respective plurality of image elements, which can conveniently be pixels or voxels.
Generating data indicating a relationship between said first image data and said second image data may comprise processing each of said plurality of image elements in said first image data together with a respective image element in said second image data. That is, pixel-wise or voxel-wise processing may be carried out. Such processing may comprise dividing a value of each image element in said second image data by a value of a respective image element in said first image data to generate third image data. The third image data is referred to herein as a shading map. A smoothing function may be applied to said third image data.
The method may further comprise processing one or both of said first and second image data to generate processed first or second image data respectively. Generating data indicating a relationship between said first image data and said second image data may then comprise generating data indicating a relationship between said processed first image data and said processed second image data.
Processing at least one of said first and second image data may comprise generating a mask indicating regions of said processed image data representing particular structures. The mask may be a binary mask. Generating the mask may comprise applying a threshold to values of image elements in the processed image data, such that image elements having a value satisfying the threshold have a corresponding mask element having a first value, while image elements having a value not satisfying the threshold have a corresponding mask element having a second value. The method may further comprise eroding areas of said mask representing a particular structure.
Each of the first and second image data may represent an image of a human or animal body. The mask may indicate regions of said image data representing bone and/or gas and/or regions of said image data representing tissue.
Processing at least one of said first and second image data may further comprise applying the generated mask to at least one of the first and second image data to generate masked image data. The method may further comprise processing said masked image data by generating values for image elements within masked regions of said image data from values for image elements within unmasked regions, for example using interpolation such as linear interpolation.
The method may further comprise appropriately pre-processing the first and second image data. Such pre-processing may be arranged to allow the first and second image data to be properly processed alongside one another. Accordingly, values of image elements in one of the first and second image data may be modified based upon values of image elements in the other of the first and second image data, so as to arrange that both the first and second image data comprise image elements having comparable values. The pre-processing may comprise registering said first and second image data with one another. The pre-processing may comprise modifying the spatial resolution of at least one of said first and second image data such that each of said first and second image data have substantially equal spatial resolution.
There is also provided a method of generating output image data, the method comprising: generating first enhanced image data using a method substantially as described above; generating second enhanced image data using a method substantially as described above; and combining said first and second enhanced image data to generate output enhanced image data.
Typically, the first and second enhanced image data are each generated using the method described above, although the masks discussed above are created using differing thresholds so as to create different enhanced image data. For example, the first enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue on the one hand and bone and gas on the other. The second enhanced image data may be created by processing the first and second image data with reference to a mask differentiating between soft tissue and bone on the one hand and gas on the other. The combination of enhanced image data in this way typically produces higher quality output data.
Combining the first and second enhanced image data may comprise generating a mask from one of said first enhanced image data and said second enhanced image data, and combining said first and second enhanced image data in accordance with said mask.
Generating said mask may comprise applying a threshold to values of image elements of said first or second enhanced image data, and optionally applying a morphological closing operation after application of said threshold. The generated mask may identify a particular structure within the enhanced image data; for example, the mask may identify bone.
The first image data may be obtained in any convenient way. For example, the first image data may be obtained using computed tomography. The second image data may be obtained in any convenient way. For example, the second image data may be obtained using cone beam computed tomography.
The invention further provides a method for determining a treatment dose for a patient. The method comprises processing first image data obtained at a first time to determine an initial treatment dose; and processing second image data obtained at a second later time together with said first image data to generate enhanced image data, and generating a modified treatment dose from said enhanced image data.
In this way, where the first image data is obtained at a first time in a treatment regime, and the second image data is obtained as the treatment regime progresses, the second image data can be used to appropriately modify the treatment dose given its enhancement. Such a method is typically advantageous where the second image data is more easily obtainable than the first image data. The treatment may be radiation therapy, intended, for example, to shrink or eradicate a tumour.
Aspects of the present invention can be implemented in any convenient way including by way of suitable methods, apparatus and computer systems. Some embodiments of the invention provide computer programs configured to carry out the methods set out above. Such computer programs can be carried on appropriate computer readable media. Such media can include tangible media such as CD-ROMs, flash memory devices, hard disk drives and so on, and also include intangible media such as communications signals.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a flowchart providing an overview of operation of an embodiment of the invention;
Figure 2 is a schematic illustration of operation of an embodiment of the present invention;
Figure 3 is a high level flowchart showing the operations carried out in the embodiment of the present invention shown in Figure 2;
Figure 4 is an image taken from a set of Computed Tomography (CT) image data;
Figure 5 is an image taken from a set of Cone Beam Computed Tomography (CBCT) image data;
Figure 6 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 7 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line;
Figure 8 is a graph showing the distribution of pixel values in CT image data in dashed line and CBCT image data in solid line after part of the processing of Figure 3;
Figure 9 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 10 is a bone mask created from the CT image data of Figure 4 by the processing of Figure 9;
Figure 11 is an image of the bone mask of Figure 10 after performance of an erosion operation;
Figure 12 is an image showing application of the mask of Figure 11 to the image of Figure 4;
Figure 13 is an image showing the result of interpolation carried out on the image of Figure 12;
Figure 14 is a mask created from the CBCT image shown in Figure 5;
Figure 15 is an image showing the result of an erosion operation carried out on the mask of Figure 14;
Figure 16 is an image showing the result of application of the mask of Figure 15 to the image of Figure 5;
Figure 17 is an image showing the result of an interpolation operation carried out on the image of Figure 16;
Figure 18 is a shading map created using the images of Figures 13 and 17;
Figure 19 is an image showing the result of a smoothing operation carried out on the shading map of Figure 18;
Figure 20 is an image showing application of the smoothed shading map of Figure 19 to the image of Figure 5;
Figure 21 is a mask created from the image of Figure 4;
Figure 22 is an image showing the result of an erosion operation carried out on the mask of Figure 21;
Figure 23 is an image showing the result of application of the mask of Figure 22 to the image of Figure 4;
Figure 24 is an image showing the result of interpolation carried out on the image of Figure 23;
Figure 25 is a mask created from the image of Figure 5;
Figure 26 is an image showing the result of an erosion operation carried out on the mask of Figure 25;
Figure 27 is an image showing the application of the mask of Figure 26 to the image of Figure 5;
Figure 28 is an image showing the result of an interpolation operation carried out on the image of Figure 27;
Figure 29 is a shading map created from the images of Figures 24 and 28;
Figure 30 is a smoothed shading map created from the shading map of Figure 29;
Figure 31 is an image showing application of the shading map of Figure 30 to the image of Figure 5;
Figure 32 is a flowchart showing part of the processing of Figure 3 in further detail;
Figure 33 is a mask created from the image of Figure 20;
Figure 34 is an image showing the result of a closing operation carried out on the mask of Figure 33;
Figure 35 is an image showing the result of filling the mask shown in Figure 34; and
Figure 36 is an image showing combination of the images of Figures 20 and 31 using the bone mask of Figure 35.
An embodiment of the invention is now described in the context of medical imaging.
Referring to Figure 1, at step S1 a computed tomography (CT) image (in the form of a slice sequence) of a patient is obtained. It will be appreciated that the CT image will, in general terms, be an image of a part of the patient relevant to a particular clinical procedure. CT images are generally of a high quality but are relatively difficult to obtain, not least because their generation requires the use of expensive imaging equipment. It is therefore often the case that where a patient is to undergo treatment (such as radiation therapy treatment), a sequence of CT image slices is obtained initially before treatment begins (as shown at step S1 of Figure 1), but that it is impractical to regularly obtain CT image sequences as treatment progresses.
Therefore, at step S2, during the course of treatment, a cone beam computed tomography (CBCT) image of the patient (or the relevant part of the patient) is obtained. This is advantageous given that CBCT images can generally be obtained with the patient in the treatment position, and do not require the patient to be moved, as would be the case with CT imaging. The use of CBCT images is however disadvantageous in that such images are of lower quality than CT images. Accordingly, in the described embodiment of the invention, at step S3 the CT image and the CBCT image are processed together in such a way that the CT image is used to improve the quality of the CBCT image, resulting in the output of an improved image at step S4. This is shown in Figure 2. Here, it can be seen that CT image data 1 and CBCT image data 2 are together input to an image processing process 3 which generates output data 4 which is an improved quality CBCT image. The image processing process 3 is described in further detail below.
Figure 3 is a flow chart showing the image processing process 3 at a high level. At step S5 both the CT image data 1 and the CBCT image data 2 are appropriately pre-processed. Two parallel streams of processing are then initiated. A first stream comprises steps S6a to S8a, while a second stream comprises steps S6b to S8b.
Although parts of this description refer to parallel processing to aid understanding, it will be appreciated that the two streams of processing can, in some embodiments, be carried out sequentially.
At step S6a, each of the CT image data 1 and CBCT image data 2 is processed individually. In each case parts of the respective image data representing bone or gas are removed, before appropriate interpolation from adjacent parts of the image data is carried out to avoid discontinuities in the image data. At step S7a a shading map is created by dividing the CBCT image data output from step S6a by the CT image data output from step S6a. Suitable smoothing is carried out at step S7a. At step S8a the shading map created at step S7a is applied to the CBCT image data, as output from the pre-processing of step S5, to generate as output first enhanced CBCT image data C1.
The processing of steps S6b to S8b of Figure 3 is similar to that of steps S6a to S8a.
However, at step S6b the processing carried out is such as to exclude only parts of each of the CT image data 1 and the CBCT image data 2 which represent air (i.e. not those which represent bone). Having excluded parts of each of the CT image data 1 and the CBCT image data 2 which represent air, appropriate interpolation is also carried out at step S6b. At step S7b an appropriate shading map is created using the CT image data 1 and the CBCT image data 2 output from the processing of step S6b.
Appropriate smoothing is also carried out at step S7b. The smoothed shading map is applied to the CBCT image data output from the pre-processing of step S5 at step S8b to generate as output second enhanced CBCT image data C2.
From the preceding description, it can be seen that two sets of enhanced CBCT image data are generated, one (C1) at step S8a and one (C2) at step S8b. At step S9 the first enhanced CBCT image data C1 is further processed to generate a mask indicating parts of the first processed CBCT image data C1 representing bone. At step S10 the mask created at step S9 is used to appropriately combine the first enhanced CBCT image data C1 and the second enhanced CBCT image data C2 to generate improved CBCT image data as the output data 4.
The processing of Figure 3 is now described in further detail.
Both the CT image data 1 and the CBCT image data 2 are arranged in a plurality of slices, each slice comprising an array of voxels. That is, each of the CT image data 1 and the CBCT image data 2 comprises a volume of voxels arranged in a plurality of slices. Figure 4 shows a slice of the CT image data 1, while Figure 5 shows a corresponding slice of the CBCT image data 2. It can be seen that the CT image data 1 provides a higher quality image than the CBCT image data 2, which shows some artefacts.
Figure 6 is a flowchart showing the pre-processing of step S5 of Figure 3 in further detail. At step S11 voxel values of the CT image data 1 and the CBCT image data 2 are processed so as to arrange that the voxel values of each set of image data are similarly scaled. Specifically, Figure 7 shows voxel values of the CT image data 1 by way of a broken line. Voxel values of the CBCT image data 2 are shown by way of a solid line. It can be seen that voxel values of the CT image data 1 define a peak 6 which represents voxels representing air, and a peak 7 which represents voxels representing tissue. Similarly, voxel values of the CBCT image data 2 define a peak 8 which represents voxels representing air and a peak 9 which represents voxels representing tissue. It can be seen that the peaks of the CT image data 1 and the CBCT image data 2 are not coincident. In order to allow the CT image data 1 and the CBCT image data 2 to be processed together it is necessary to modify values of voxels of the CBCT image data 2. This is achieved by processing voxels in the CBCT image data 2 by multiplying values of those voxels by a determined scalar value, and adding a further scalar value to the result of the multiplication. Specifically:

p' = Ap + B

where p is the initial voxel value, p' is the modified voxel value, and A and B are scalar values chosen so as to allow the peaks defined by voxel values of the CBCT image data 2 to be made coincident with peaks of the CT image data 1. The values of A and B are determined by finding the values which minimize the sum of squared differences between the two histograms. Starting values of A and B are defined (such that A = 1.0, and B = (mean CT pixel value - mean CBCT pixel value)). The values of p' produced by these values of A and B are computed. The CT histogram is then smoothed slightly, to mitigate the difference in width of the peaks between CT and CBCT, and subtracted from the scaled CBCT histogram (p'). Each element in the array resulting from this subtraction is then squared and the sum of the squared difference values is computed. The values of A and B are then iterated and the sum of the squared differences is computed at each iteration. The iteration is continued until a minimum in the sum of squared differences is found. The minimization is carried out using the downhill simplex method of Nelder and Mead, 1965, Computer Journal, Vol 7, pp 308-313.
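By way of illustration only, a minimal Python sketch of this intensity-matching step follows, using NumPy and SciPy rather than the implementation used in the described embodiment. The bin count, histogram range, smoothing width and function names are assumptions, not values taken from the patent.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import minimize

    def match_intensities(ct, cbct, bins=256):
        # Find A and B such that p' = A*p + B brings the CBCT histogram
        # into line with the (slightly smoothed) CT histogram.
        lo = float(min(ct.min(), cbct.min()))
        hi = float(max(ct.max(), cbct.max()))
        ct_hist, _ = np.histogram(ct, bins=bins, range=(lo, hi))
        # Slight smoothing of the CT histogram mitigates the difference
        # in peak width between CT and CBCT (sigma is an assumption).
        ct_hist = gaussian_filter1d(ct_hist.astype(float), sigma=1.0)

        def cost(params):
            a, b = params
            cbct_hist, _ = np.histogram(a * cbct + b, bins=bins, range=(lo, hi))
            # Sum of squared differences between the two histograms.
            return float(np.sum((cbct_hist - ct_hist) ** 2))

        # Starting values as given in the text: A = 1.0,
        # B = mean CT value minus mean CBCT value.
        x0 = np.array([1.0, float(ct.mean() - cbct.mean())])
        result = minimize(cost, x0, method='Nelder-Mead')  # downhill simplex
        a, b = result.x
        return a * cbct + b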
In an alternative embodiment, the values A and B can be chosen by the user to give the best match as subjectively assessed by the user. This is sometimes necessary in cases where the automatic determination fails.
Having modified voxel values of the CBCT image data 2, voxel values of the CBCT image data 2 have a distribution as shown in Figure 8. That is, while a broken line representing voxel values of the CT image data 1 again shows two peaks 6, 7 in positions which are the same as those of corresponding peaks in Figure 7, the solid line representing voxel values of the CBCT image data 2 shows two peaks 8', 9' which correspond to the peaks 8, 9 of Figure 7 but which have been moved so as to be coincident with voxel values represented by the peaks 6, 7.
Referring back to Figure 6, the processing of step S11 described above is arranged to modify voxel values of the CBCT image data 2 so as to be within a similar range to those of the CT image data 1. At step S12 the CT image data 1 and the CBCT image data 2 are registered together, that is, the CBCT image data 2 is spatially modified so as to be defined by a co-ordinate system common to the CT image data 1 and the CBCT image data 2. It will be appreciated that such registration is required so as to allow the CT image data 1 and the CBCT image data 2 to be processed together. More specifically, such registration allows respective points of the CT image data 1 and the CBCT image data 2 to be compared. The registration process of step S12 is carried out using a chamfer matching algorithm which is described in van Herk M, Kooy HM. "Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching". Medical Physics 1994;21(7):1163-78, the contents of which are herein incorporated by reference.
Van Herk et al describe a chamfer matching algorithm in the context of medical images. Van Herk et al compare a number of different ways to implement chamfer matching for medical images. Methods for matching CT images are described, as are methods for matching a CT image with an MRI image and a CT image with a SPECT image. It has been found that the described method for matching two CT images can be applied to match a CT image and a CBCT image.
When selecting feature points from the CT image, the described method includes a step of reducing the number of points to speed up the calculation. The number to which the feature points are reduced is treated as a variable that can be adjusted, and results are presented for different values. However, in a preferred embodiment of the present invention, there is no step where the number of feature points is reduced. This can be considered as using a value for the reduced number of points which is the same as the initial number of points.
Van Herk et al describe three different cost functions for the matching: rms distance, mean distance, and maximum distance. A preferred embodiment of the present invention uses mean distance as a cost function. Van Herk et al describe two different optimisation methods: downhill simplex and Powell's method. A preferred embodiment of the present invention uses downhill simplex optimisation.
In general terms, the chamfer matching algorithm registers images represented by the CT image data 1 and the CBCT image data 2 by reference to bone structures within the two images. Voxels representing bone edges are identified in each image and the generalised distance between corresponding voxels in the two images is minimised by an appropriate registration operation, which may comprise any suitable transformation such as a rotation and/or translation.
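As a rough illustration of the chamfer matching cost, the sketch below aligns two sets of bone-edge voxels under translation only; the full method also handles rotation, and the function names and translation-only restriction are assumptions made for brevity.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from scipy.optimize import minimize

    def chamfer_translation(ct_edges, cbct_edges):
        # ct_edges, cbct_edges: boolean volumes marking bone-edge voxels.
        # Distance of every voxel to the nearest CT bone-edge voxel.
        dist = distance_transform_edt(~ct_edges)
        points = np.argwhere(cbct_edges).astype(float)  # CBCT feature points

        def mean_distance(t):
            # Mean distance cost, as preferred in the text.
            moved = np.round(points + t).astype(int)
            moved = np.clip(moved, 0, np.array(dist.shape) - 1)
            return float(dist[tuple(moved.T)].mean())

        # Downhill simplex optimisation of the translation vector.
        result = minimize(mean_distance, np.zeros(ct_edges.ndim),
                          method='Nelder-Mead')
        return result.x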
Referring again to Figure 6, at step S13 an operation is carried out to ensure that the CT image data 1 and the CBCT image data 2 are of equal spatial resolution. The CT image data 1 will be defined by a plurality of voxels of typical size 0.95mm x 0.95mm x 5mm in the lateral, vertical and longitudinal directions respectively. The CBCT image data 2 will be defined by a plurality of voxels of typical size 1mm x 1mm x 1mm. It can therefore be appreciated that the CBCT image data 2 is of different spatial resolution than the CT image data 1. Accordingly, the CT image data 1 is processed so as to change its spatial resolution by interpolation in each of the three dimensions.
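A resampling step of this kind might look as follows in Python; the interpolation order and the helper name are assumptions.

    from scipy.ndimage import zoom

    def resample_to(ct, ct_spacing, target_spacing):
        # e.g. ct_spacing = (0.95, 0.95, 5.0) mm resampled to
        # target_spacing = (1.0, 1.0, 1.0) mm, as in the text.
        factors = [s / t for s, t in zip(ct_spacing, target_spacing)]
        return zoom(ct, factors, order=1)  # linear interpolation per axis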
The processing of Figure 6 is therefore such as to arrange that each voxel of the CT image data 1 can be processed together with a corresponding voxel of the CBCT image data 2, the voxel values having been processed at step S11 so as to be comparable with one another.
Referring back to Figure 3, having described the pre-processing of step S5 with reference to Figure 6, the processing of steps S6a and S7a is now described with reference to Figure 9. CT image data 1' which is output from the pre-processing of step S5 is input to the processing of step S14 which generates a binary mask indicating regions of the image represented by the CT image data 1' which represent soft tissue, and regions which do not represent tissue. Typically voxels having values in the range 850 to 1150 are considered to represent soft tissue and such voxels are set to have a value of 1. All other voxels (i.e. those considered to represent air or bone) are set to have a value of 0. Figure 10 shows the output of the processing of step S14 where the input is CT image data 1' as shown in Figure 4 after appropriate pre-processing. It can be seen that voxels representing bone or air are illustrated in black, while those representing tissue are shown in white.
Having generated an appropriate mask at step S14, this mask is further processed at step S15. Specifically an erosion operation is carried out using a 5mm structuring element. Erosion operations in general terms will be known to those of ordinary skill in the art. In general terms, the 5mm structuring element is centred on each voxel of the mask in turn. If any voxel within the structuring element at a particular position has a value of 0 (i.e. is considered to represent air or bone), all voxels within the structuring element are set to have a value of 0. It can accordingly be appreciated that the effect of the erosion operation is to expand regions of the CT image data 1 which represent air and bone, and reduce regions of the CT image data 1 which represent soft tissue. The output of the erosion operation of step S15 when carried out on the mask of Figure 10 is shown in Figure 11.
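Steps S14 and S15 together might be sketched as follows, assuming 1mm voxels so that the 5mm element becomes a 5-voxel cube; the thresholds shown are those quoted above for the CT data.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def soft_tissue_mask(volume, lo=850, hi=1150, voxel_mm=1.0, erode_mm=5.0):
        # Step S14: 1 for soft tissue, 0 for air or bone.
        mask = (volume >= lo) & (volume <= hi)
        # Step S15: erosion with a cubic structuring element; this expands
        # the excluded air/bone regions and shrinks the tissue regions.
        width = max(1, int(round(erode_mm / voxel_mm)))
        structure = np.ones((width,) * volume.ndim, dtype=bool)
        return binary_erosion(mask, structure=structure)

For the CBCT data the wider 600 to 1350 range described below would be passed as lo and hi instead.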
The mask output from step S15 is then applied to the CT image data 1' at step S16, so as to remove from the CT image data 1' regions of the CT image data which do not represent soft tissue. The output of step S16 when the mask of Figure 11 is applied to the CT image data of Figure 4 is shown in Figure 12. Having removed regions of the image represented by the CT image data 1 which do not represent soft tissue, an interpolation operation is carried out at step S17, generating an image as shown in Figure 13. Any appropriate interpolation can be used to generate voxel values for parts of the CT image data 1 which do not represent soft tissue. In a preferred embodiment of the invention linear interpolation is used for reasons of speed. Some embodiments of the invention are implemented using the Interactive Data Language (IDL) package available from ITT Visual Information Solutions of Boulder, Colorado, USA. The IDL package provides functions TRIANGULATE and TRIGRID which can conveniently be used to perform the necessary interpolation. The TRIANGULATE function constructs a Delaunay triangulation of a planar set of points, and the TRIGRID function can then be used to carry out the required interpolation.
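In Python, scipy.interpolate.griddata offers a close analogue of the TRIANGULATE/TRIGRID pair, performing Delaunay-based linear interpolation. The sketch below fills masked-out pixels of a single 2-D slice; the fallback fill value and function name are assumptions.

    import numpy as np
    from scipy.interpolate import griddata

    def fill_masked(slice_2d, tissue_mask):
        # Interpolate values for masked-out (non-tissue) pixels from the
        # surrounding unmasked (tissue) pixels, as at steps S16 and S17.
        known = np.argwhere(tissue_mask)
        unknown = np.argwhere(~tissue_mask)
        values = slice_2d[tissue_mask].astype(float)
        filled = slice_2d.astype(float).copy()
        filled[~tissue_mask] = griddata(known, values, unknown,
                                        method='linear',
                                        fill_value=float(values.mean()))
        return filled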
It can be seen from Figure 9 that similar processing to that described above is carried out on CBCT image data 2' which is output from the pre-processing of step S5.
Specifically, at step S18 an appropriate threshold is applied to voxels of the CBCT image data 2 shown in Figure 5, to generate a mask of the form shown in Figure 14.
Here, the threshold applied is such that voxels having values between 600 and 1350 are considered to represent soft tissue and are set to a value of 1, while all other voxels are set to a value of 0. A larger range of voxel values is used in connection with the CBCT image data 2' (as compared with the CT image data 1') due to greater intensity variations in the CBCT data 2'.
At step S19 an erosion operation is carried out on the mask generated at step S18 and shown in Figure 14, generating a mask as shown in Figure 15. Again, the erosion operation uses a 5mm cube structuring element, and works analogously to the erosion operation of step S15. The mask of Figure 15 is applied to the CBCT image data 2 shown in Figure 5 at step S20, resulting in the generation of an image as shown in Figure 16. At step S21 the masked image data is interpolated as described with reference to step S17, resulting in the generation of an image as shown in Figure 17.
At step S22 of Figure 9 the interpolated CBCT image data output from the interpolation operation of step S21 and shown in Figure 17 is divided by the CT image data output from the interpolation operation of step S17 and shown in Figure 13 to generate an image shown in Figure 18, which is referred to as a shading map. This division is carried out by dividing pairs of voxels in turn, one voxel of each pair being taken from the interpolated CBCT data and the other voxel of each pair being taken from the interpolated CT data. The shading map is smoothed at step S23, by the application of a 15mm boxcar average function, resulting in the generation of a smoothed shading map as shown in Figure 19. Smoothing using a boxcar average function essentially processes blocks of voxels in turn and replaces a voxel value for a voxel at an origin of a block with an average of all voxel values in the block.
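The division and boxcar smoothing of steps S22 and S23 can be sketched as below, again assuming 1mm voxels so that the 15mm boxcar becomes a 15-voxel moving average; the variable names are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def shading_map(cbct_filled, ct_filled, voxel_mm=1.0, smooth_mm=15.0):
        # Step S22: voxel-wise ratio of interpolated CBCT to interpolated CT.
        ratio = cbct_filled / np.maximum(ct_filled, 1e-6)  # avoid divide-by-zero
        # Step S23: boxcar (moving-average) smoothing over a 15 mm window.
        size = max(1, int(round(smooth_mm / voxel_mm)))
        return uniform_filter(ratio, size=size)

    # Step S8a then divides the pre-processed CBCT volume by this smoothed
    # map to give the first enhanced CBCT image data, e.g.:
    # enhanced_c1 = cbct_pre / np.maximum(smoothed_map, 1e-6)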
The output of step S23 is the output of step S7a of Figure 3. Referring to Figure 3, at step S8a the CBCT image data output from the processing of step S5 is divided by the smoothed shading map of Figure 19 resulting in the generation of first enhanced CBCT image data, as shown in Figure 20.
As indicated above, the processing of steps S6b and S7b is equivalent to the processing of steps S6a and S7a, save that a different mask is created. That is, the processing of steps S6b and S7b is the same as that illustrated in Figure 9, subject to a modification to the thresholds applied at steps S14 and S18. Specifically, previously voxels representing soft tissue were set to a value of 1, while all other voxels (specifically those representing bone or air) were set to a value of 0. In the processing of steps S6b and S7b the thresholds applied at steps S14 and S18 are such as to exclude voxels representing air, but retain voxels representing bone. That is, voxels representing air are set to 0 while voxels representing soft tissue or bone are set to 1. This will involve applying a threshold such that voxels in the CT data 1' having a value greater than 850 and voxels in the CBCT data 2' having a value greater than 600 are set to a value of 1, while all other voxels are set to a value of 0.
When the processing of steps S6b and S7b is carried out, the mask generated at step S14 is that shown in Figure 21. The erosion operation of step S15 generates the mask shown in Figure 22, which is applied to the CT image data 1' as output from the pre-processing of step S5 at step S16 to generate image data as shown in Figure 23. Interpolation carried out at step S17 generates an image as shown in Figure 24.
Additionally, at step S18 the mask shown in Figure 25 is created by applying an appropriate threshold to the CBCT image data 2' which differs from that applied for the corresponding processing described with reference to steps S6a and S7a of Figure 3. The mask created at step S18 is eroded at step S19 to create the mask of Figure 26.
When the mask of Figure 26 is applied to the CBCT image data 2' output from the pre-processing of step S5 at step S20, image data as shown in Figure 27 is generated.
The interpolation of step S21 generates an image as shown in Figure 28.
Again, the processing of steps S6b and S7b includes the operations of steps S22 and S23. At step S22 the image data shown in Figure 28 generated from the CBCT image data 2 is divided by the image data shown in Figure 24 generated from the CT image data 1. The resulting shading map is shown in Figure 29. The smoothing of step S23 is applied to the shading map of Figure 29 to generate a smoothed shading map as shown in Figure 30.
Referring again to Figure 3, at step S8b the CBCT image data 2' output from the pre-processing of step S5 is divided by the smoothed shading map shown in Figure 30 to create second enhanced CBCT image data as shown in Figure 31.
At step S9 of Figure 3, the first enhanced image data generated at step S8a and shown in Figure 20 is processed as shown in Figure 32. At step S24 a thresholding operation is carried out on the image shown in Figure 20 such that all voxels having a value greater than 1150 (which voxels are considered to represent regions of bone) are set to a value of 1, while all other voxels are set to a value of 0. This results in the generation of a bone mask as shown in Figure 33.
At step S25 the mask of Figure 33 is subjected to a dilation operation using a 25mm spherical structuring element, before an erosion operation using the same structuring element is carried out at step S26. The dilation operation involves centring the structuring element at each voxel of the mask in turn; if any voxel enclosed by the structuring element has a value of 1, all voxels enclosed by the structuring element are set to have a value of 1. Erosion operates as described above. The combination of dilation and erosion makes up a morphological closing operation, the result of which is shown in Figure 34. The mask of Figure 34 is further processed at step S27 by filling in "gaps" in the bone structure. This is achieved by replacing any voxels or groups of voxels having a value of 0 which are wholly enclosed by voxels having a value of 1 with voxels having a value of 1. The result of the processing of step S27 is shown in Figure 35.
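The thresholding, closing and filling of steps S24 to S27 can be sketched as follows; a cubic structuring element stands in for the 25mm spherical element of the text, 1mm voxels are assumed, and the function name is illustrative.

    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion, binary_fill_holes

    def bone_mask(enhanced_c1, threshold=1150, voxel_mm=1.0, close_mm=25.0):
        # Step S24: threshold; voxels above 1150 are taken to be bone.
        mask = enhanced_c1 > threshold
        width = max(1, int(round(close_mm / voxel_mm)))
        structure = np.ones((width,) * enhanced_c1.ndim, dtype=bool)
        # Steps S25 and S26: dilation followed by erosion with the same
        # element, i.e. a morphological closing.
        closed = binary_erosion(binary_dilation(mask, structure), structure)
        # Step S27: fill 0-valued regions wholly enclosed by 1-valued voxels.
        return binary_fill_holes(closed)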
The mask created at step S27 and shown in Figure 35 is then used at step S10 (Figure 3) to generate an output image by combining the enhanced image data output from steps S8a and S8b. Specifically, regions of the output image data within the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8b and shown in Figure 31. Regions of the output image data outside the bone regions of the mask of Figure 35 have values determined by the enhanced image data output from step S8a and shown in Figure 20. The image output at step S10 is shown in Figure 36.
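The final composition of step S10 reduces to a masked selection between the two enhanced volumes, for example (variable names are illustrative):

    import numpy as np

    def combine(enhanced_c1, enhanced_c2, bone):
        # Inside the bone mask take values from the second enhanced volume
        # (C2, step S8b); outside it take values from the first (C1, step S8a).
        return np.where(bone, enhanced_c2, enhanced_c1)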
In the preceding description it has been indicated that some of the figures represent particular image data. It will be appreciated that, given that the CT image data 1 and the CBCT image data 2 are both three-dimensional, figures illustrating image data in fact represent a slice of that image data. It will be appreciated that the generation of improved image data, such as that output from the processing of step S10 of Figure 3, has a number of applications. For example, it is currently the case that CT image data is used to generate data used to calculate doses of radiotherapy to be applied to a patient for treatment of a particular tumour. Such doses will be computed with reference to the size and location of the tumour and will specify the quantity of the radiation dose to be applied together with the location at which the radiation should be applied, the time for which radiation should be applied, and the frequency of application of radiation.
CBCT image data is typically obtained during treatment, but is of insufficient accuracy to allow a radiotherapy dose to be modified should the tumour have grown or shrunk during treatment. As indicated above, obtaining CBCT image data during treatment is advantageous given that it is typically more easily obtained at the point of treatment than corresponding CT image data.
Using the methods described herein, it is possible to process the CBCT image data obtained during treatment together with the initially obtained CT image data to generate CBCT image data of improved quality which can be used to modify radiotherapy dosage.
It will be appreciated that although preferred embodiments of the invention have been described above, various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention, as defined by the appended claims. In particular, it will be appreciated that although embodiments of the invention have been described with reference to medical data, the invention is not limited to medical applications. For example, the invention also has applications in the field of terrain mapping. Here, two images may be obtained such that the effects of lighting or cloud cover degrade one of the images. The two images may be processed using the method described above such that the image which is degraded is improved by use of the other image. Such a method can be useful where a high quality image of particular terrain exists, but transient images of lower quality are obtained. In such a case the high quality image can be used to improve the quality of the lower quality transient images.

Claims (38)

1. A method for generating enhanced image data, the method comprising: receiving first image data and second image data, wherein each of said first and second image data comprises a respective plurality of image elements; generating data indicating a relationship between a first region of said first image data and a second region of said second image data, each of said first and second regions comprising a plurality of image elements; and applying said generated data to said second image data to generate enhanced image data.
2. A method according to claim 1, wherein said relationship is an arithmetic relationship.
3. A method according to claim 2, wherein said processing comprises performing a mathematical operation on a value of each image element in said second image data and a value of a respective image element in said first image data to generate third image data.
4. A method according to claim 3, wherein said processing comprises dividing a value of each image element in said second image data by a value of a respective image element in said first image data to generate third image data.
5. A method according to claim 3, wherein said processing comprises subtracting a value of each image element in said second image data from a value of a respective image element in said first image data to generate third image data.
6. A method according to claims 3, 4 or 5, further comprising applying a smoothing operation to said third image data.
7. A method according to any preceding claim, further comprising processing one of said first and second image data to generate processed first or second image data respectively.
8. A method according to any one of claims 1 to 6, comprising processing each of said first and second image data to generate processed first and second image data.
9. A method according to claim 8, wherein generating data indicating a relationship between said first image data and said second image data comprises generating data indicating a relationship between said processed first image data and said processed second image data.
10. A method according to claim 8 or 9, wherein said processing identifies regions of each of said first and second image data which are non-comparable.
11. A method according to claim 10, wherein said regions of said first and second image data which are non-comparable represent bone and/or gas.
12. A method according to claim 10 or 11, further comprising modifying values of image elements in said non-comparable regions based upon values of image elements in adjacent regions.
13. A method according to claim 7, 8 or 9, wherein processing at least one of said first and second image data comprises generating a mask indicating regions of said processed image data representing particular structures.
14. A method according to claim 13, wherein said mask is a binary mask.
15. A method according to claim 14, wherein generating said mask comprises applying a threshold to values of image elements in the processed image data, such that image elements having a value satisfying the threshold have a corresponding mask element having a first value, while image elements having a value not satisfying the threshold have a corresponding mask element having a second value.
16. A method according to claim 13, 14 or 15, further comprising eroding areas of said mask representing a particular structure.
17. A method according to claim 13, 14, 15 or 16, wherein each of said first and second image data represent an image of a human or animal body and said mask indicates regions of said image data representing bone and/or gas.
18. A method according to claim 13, 14, 15, 16 or 17, wherein each of said first and second image data represent an image of a human or animal body and said mask indicates regions of said image data representing tissue.
19. A method according to any one of claims 13 to 18, wherein processing at least one of said first and second image data further comprises applying the generated mask to at least one of the first and second image data to generate masked image data.
20. A method according to claim 19, further comprising processing said masked image data by generating values for image elements within masked regions of said image data from values for image elements within unmasked regions.
21. A method according to claim 20, wherein generating values for image elements within masked regions of said image data comprises an interpolation operation.
22. A method according to any preceding claim, further comprising pre-processing said first and second image data.
23. A method according to claim 22, wherein said pre-processing comprises modifying values of one of said first and second image data based upon values of the other of said first and second image data.
24. A method according to claim 22 or 23, wherein said pre-processing comprises registering said first and second image data with one another.
25. A method according to claim 22, 23 or 24, wherein said pre-processing comprises modifying the spatial resolution of at least one of said first and second image data such that each of said first and second image data have substantially equal spatial resolution.
26. A method of generating output image data, the method comprising:
generating first enhanced image data using a method according to any preceding claim; generating second enhanced image data using a method according to any preceding claim; and combining said first and second enhanced image data to generate output enhanced image data.
27. A method according to claim 26, wherein combining said first and second enhanced image data comprises generating a mask from one of said first enhanced image data and said second enhanced image data, and combining said first and second enhanced image data in accordance with said mask.
28. A method according to claim 27, wherein generating said mask comprises applying a threshold to values of image elements of said first or second enhanced image data.
29. A method according to claim 28, wherein generating said mask comprises applying a morphological closing operation after application of said threshold.
30. A method according to any preceding claim, wherein said image elements are pixels or voxels.
31. A method according to any preceding claim, wherein said first image data is obtained using computed tomography and said second image data is obtained using cone beam computed tomography.
32. A computer readable medium carrying computer readable instructions configured to cause a computer to carry out a method according to any preceding claim.
33. A computer apparatus for generating enhanced image data, the apparatus comprising: a program memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein the processor readable instructions comprise instructions configured to cause the computer to carry out a method according to any one of claims 1 to 31.
34. A method for determining a treatment dose for a patient, the method comprising: processing first image data obtained at a first time to determine an initial treatment dose; and processing second image data obtained at a second later time together with said first image data to generate enhanced image data, and generating a modified treatment dose from said enhanced image data;
wherein generating enhanced image data comprises carrying out a method according to any one of claims 1 to 31.
35. A method according to claim 34, wherein said treatment is radiation therapy.
36. A method according to claim 34 or 35, wherein said first image data is generated using a computed tomography method and said second image data is generated using a cone beam computed tomography method.
37. A computer readable medium carrying computer readable instructions configured to cause a computer to carry out a method according to any one of claims 34 to 36.
38. A computer apparatus for determining a treatment dose for a patient, the apparatus comprising: a program memory storing processor readable instructions; and a processor configured to read and execute instructions stored in said program memory; wherein the processor readable instructions comprise instructions configured to cause the computer to carry out a method according to any one of claims 34 to 36.
GB0719076A 2007-09-28 2007-09-28 Image enhancement method Expired - Fee Related GB2453177C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0719076A GB2453177C (en) 2007-09-28 2007-09-28 Image enhancement method
PCT/GB2008/002892 WO2009040497A1 (en) 2007-09-28 2008-08-28 Image enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0719076A GB2453177C (en) 2007-09-28 2007-09-28 Image enhancement method

Publications (4)

Publication Number Publication Date
GB0719076D0 GB0719076D0 (en) 2007-11-07
GB2453177A true GB2453177A (en) 2009-04-01
GB2453177B GB2453177B (en) 2010-03-24
GB2453177C GB2453177C (en) 2010-04-28

Family

ID=38701927

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0719076A Expired - Fee Related GB2453177C (en) 2007-09-28 2007-09-28 Image enhancement method

Country Status (2)

Country Link
GB (1) GB2453177C (en)
WO (1) WO2009040497A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9036899B2 (en) 2010-06-11 2015-05-19 Katholieke Universiteit Leuven Method for quantifying local bone changes
GB2533801A (en) * 2014-12-31 2016-07-06 Gen Electric Method and system for tomosynthesis projection images enhancement
US10535167B2 (en) 2014-12-31 2020-01-14 General Electric Company Method and system for tomosynthesis projection image enhancement and review

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062997B (en) * 2019-12-09 2023-09-12 上海联影医疗科技股份有限公司 Angiography imaging method, system, equipment and storage medium
CN111192268B (en) * 2019-12-31 2024-03-22 广州开云影像科技有限公司 Medical image segmentation model construction method and CBCT image bone segmentation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156747A1 (en) * 2002-02-15 2003-08-21 Siemens Aktiengesellschaft Method for the presentation of projection images or tomograms from 3D volume data of an examination volume
WO2004008969A2 (en) * 2002-07-23 2004-01-29 Ge Medical Systems Global Technology Company, Llc Methods and apparatus for motion compensation in image reconstruction
WO2007056082A1 (en) * 2005-11-03 2007-05-18 Siemens Medical Solutions Usa, Inc. Automatic change quantification from medical image sets

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070047834A1 (en) * 2005-08-31 2007-03-01 International Business Machines Corporation Method and apparatus for visual background subtraction with one or more preprocessing modules
WO2009004571A1 (en) * 2007-07-05 2009-01-08 Koninklijke Philips Electronics N.V. Method and apparatus for image reconstruction
US8144953B2 (en) * 2007-09-11 2012-03-27 Siemens Medical Solutions Usa, Inc. Multi-scale analysis of signal enhancement in breast MRI

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030156747A1 (en) * 2002-02-15 2003-08-21 Siemens Aktiengesellschaft Method for the presentation of projection images or tomograms from 3D volume data of an examination volume
WO2004008969A2 (en) * 2002-07-23 2004-01-29 Ge Medical Systems Global Technology Company, Llc Methods and apparatus for motion compensation in image reconstruction
WO2007056082A1 (en) * 2005-11-03 2007-05-18 Siemens Medical Solutions Usa, Inc. Automatic change quantification from medical image sets

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9036899B2 (en) 2010-06-11 2015-05-19 Katholieke Universiteit Leuven Method for quantifying local bone changes
GB2533801A (en) * 2014-12-31 2016-07-06 Gen Electric Method and system for tomosynthesis projection images enhancement
GB2533801B (en) * 2014-12-31 2018-09-12 Gen Electric Method and system for tomosynthesis projection images enhancement
US10092262B2 (en) 2014-12-31 2018-10-09 General Electric Company Method and system for tomosynthesis projection images enhancement
US10535167B2 (en) 2014-12-31 2020-01-14 General Electric Company Method and system for tomosynthesis projection image enhancement and review

Also Published As

Publication number Publication date
GB2453177C (en) 2010-04-28
GB2453177B (en) 2010-03-24
WO2009040497A1 (en) 2009-04-02
GB0719076D0 (en) 2007-11-07

Similar Documents

Publication Publication Date Title
Thummerer et al. Comparison of CBCT based synthetic CT methods suitable for proton dose calculations in adaptive proton therapy
Kurz et al. CBCT correction using a cycle-consistent generative adversarial network and unpaired training to enable photon and proton dose calculation
Kida et al. Cone beam computed tomography image quality improvement using a deep convolutional neural network
CN109035197B (en) CT radiography image kidney tumor segmentation method and system based on three-dimensional convolution neural network
Yang et al. 4D‐CT motion estimation using deformable image registration and 5D respiratory motion modeling
Marchant et al. Shading correction algorithm for improvement of cone-beam CT images in radiotherapy
US8000435B2 (en) Method and system for error compensation
Wei et al. X-ray CT high-density artefact suppression in the presence of bones
EP1316919B1 (en) Method for contrast-enhancement of digital portal images
US8594407B2 (en) Plane-by-plane iterative reconstruction for digital breast tomosynthesis
RU2556428C2 (en) Method for weakening of bone x-ray images
Marchant et al. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods
Zhao et al. Patient-specific scatter correction for flat-panel detector-based cone-beam CT imaging
CN107767436B (en) Volume rendering with segmentation to prevent color bleed
Zhang An unsupervised 2D–3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation
US10282872B2 (en) Noise reduction in tomograms
WO2007148263A1 (en) Method and system for error compensation
JP7104034B2 (en) Bone and hard plaque segmentation in spectral CT
Wu et al. Iterative CT shading correction with no prior information
EP2958494A1 (en) Structure propagation restoration for spectral ct
Xu et al. An algorithm for efficient metal artifact reductions in permanent seed implants
Bär et al. Improving radiotherapy planning in patients with metallic implants using the iterative metal artifact reduction (iMAR) algorithm
Schnurr et al. Simulation-based deep artifact correction with convolutional neural networks for limited angle artifacts
Alam et al. Generalizable cone beam CT esophagus segmentation using physics-based data augmentation
Li et al. Multienergy cone-beam computed tomography reconstruction with a spatial spectral nonlocal means algorithm

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20140928

S28 Restoration of ceased patents (sect. 28/pat. act 1977)

Free format text: APPLICATION FILED

S28 Restoration of ceased patents (sect. 28/pat. act 1977)

Free format text: RESTORATION ALLOWED

Effective date: 20150807

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20180928