WO2015175848A1 - System and method for automatic localization of structures in projection images
- Publication number: WO2015175848A1 (application PCT/US2015/030913)
- Authority: WIPO (PCT)
- Legal status: Ceased (the status as listed is an assumption, not a legal conclusion)
Classifications
- A61B5/055 — Magnetic resonance imaging [MRI]
- A61B5/0035 — Imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
- A61B5/0042 — Imaging apparatus adapted for image acquisition of the brain
- A61B6/5247 — Combining image data from an ionising-radiation diagnostic technique and a non-ionising-radiation diagnostic technique, e.g. X-ray and ultrasound
- G06T7/33 — Image registration using feature-based methods
- A61B2576/026 — Medical imaging apparatus involving image processing specially adapted for the brain
- G01R33/4812 — MR combined with X-ray or computed tomography [CT]
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/10116 — X-ray image
- G06T2207/10121 — Fluoroscopy
- G06T2207/30012 — Spine; Backbone
- G06T2207/30016 — Brain
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing
Definitions
- the present invention relates to methods for localization and identification of structures in a 2D image.
- Hybrid medical imaging systems such as positron emission tomography / computed tomography (PET/CT) and PET / magnetic resonance (PET/MR) are routinely used as diagnostic tools for brain imaging in clinical and research environments.
- In PET/CT systems, X-ray attenuation coefficients from computed tomography (CT) images are used for attenuation correction of PET images.
- PET/MR systems have been introduced in diagnostic applications.
- a significant challenge for PET/MR systems is that the intensities of a brain magnetic resonance (MR) image are based on magnetic properties (e.g., proton density, the longitudinal and transverse relaxation times) that, unlike in computed tomography (CT), have no straightforward relation to electron density, which determines γ-photon attenuation.
- segmentation-based methods have been reported to synthesize attenuation coefficient maps (μ-maps) from MR images.
- atlas is synonymous with "reference”.
- Segmentation based methods rely on 3 or 4 class segmentations of the MR image (e.g., soft tissue, fat, air and bone) either using a Dixon-based approach or intensity-based segmentation of the MRI. Then typical CT intensities are assigned to the corresponding tissue labels to create a CT-like image.
- Dixon-based and intensity-based approaches often ignore bone or perform poorly with respect to reconstructing bone. This is because standard clinical MR scans do not show any signal for bone.
- Ultra-short echo time (UTE) imaging is a relatively new MRI technique that enables imaging of structures with short T2 relaxation times such as bone. Combined with an image with a longer echo time, where bone produces extremely low signal, a better bone segmentation can be obtained from dual echo UTE.
- Atlas based methods rely on learning a regression from MR intensities to CT Hounsfield units (HU). Instead of just using voxel intensities, patches or sub-images are preferred to estimate CT numbers, because a patch encodes neighborhood information around a voxel.
- one "atlas" consists of one MR image and its corresponding CT image. Multiple atlas MR images are first deformably registered to a subject's MR image. For a patch in the subject MR, multiple relevant patches from the atlas MR images are found. Then corresponding CT patches are combined to estimate the CT numbers for the subject patch. Atlas-based methods usually require that the atlas MR is well aligned to the subject.
- the deformable registration algorithms that are used for this purpose can be computationally expensive, and the final quality of the PET reconstruction invariably depends on the accuracy of the registration. This can be problematic when the geometry of the anatomy between the atlas and the subject differs substantially or the subject exhibits pathology.
II. Background to Localization
- Wrong site surgery is a surprisingly common error in medical practice with major ramification to the patient and healthcare system. It not only results in failure to deliver proper therapy to the patient, but it also has profound medical, legal and social implications.
- spinal surgery for example, the potential for wrong-site surgery (viz., "wrong level” surgery, referring to the level of vertebral body) is significant due to the difficulty of localizing the target vertebrae based solely on visual impression, palpation and fluoroscopic imaging. Vertebrae in the mid-thoracic region can be particularly challenging to localize, since they have fairly similar visual and radiographic appearance and are at a distance from unambiguous anatomical landmarks.
- a common method to accurately localize a given vertebral level is to "count" vertebrae under fluoroscopy, typically beginning at the sacrum and then “counting" under fluoroscopic visualization up to the targeted vertebral level. Such a method involves an undesirable amount of time and ionizing radiation. Even with fluoroscopic counting, surgery delivered to the wrong level is a fairly frequent occurrence. According to a questionnaire study of 3,505 surgeons, carrying out 1,300,000 procedures, 418 (0.032% or 1 in 3,110) wrong-level spine surgeries were performed [see Mody M G, Nourbakhsh A, Stahl D L, Gibbs M, Alfawareh M, Garges K J., "The prevalence of wrong level surgery among spine surgeons," Spine (Phila Pa.
- a method for automatically registering medical images can include obtaining one or more three-dimensional (3D) magnetic resonance (MR) images of a target area; obtaining a two-dimensional (2D) image of at least a portion of the target area; computing a MR-computed tomography (CT) synthesis from the 3D MR image that was obtained; computing a 3D-2D registration based on the MR-CT synthesis and the 2D image that was obtained; and outputting a result of the 3D-2D registration.
- the 3D image is obtained from a 3D acquisition device selected from the group consisting of: CT, SPECT, MR, linear accelerator with on-board X-ray imaging, C-arm with 3D imaging capability, and an X-ray-based modality.
- the outputting the result includes providing an image of a common representation of the registered 3D and 2D images.
- the outputting the result includes controlling a medical intervention by providing at least one of: 3D surgical guidance, navigation support, structural localization guidance, improved confidence as an independent check on existing methods of structure localization.
- the 3D image and the 2D image are obtained separately.
- computing the 3D-2D registration includes: iteratively computing a 2D projection view from the 3D image; and computing a target 2D projection view that best matches the 2D image by using a numerical optimization that maximizes similarity between the target 2D projection view and the 2D image.
- the method includes estimating an approximate pose between the 3D image and the 2D image as an initial estimate for the numerical optimization.
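The iterative project-and-compare registration described in the items above can be sketched in a few lines. This is a deliberately minimal illustration, not the patented method: the "projection" is a toy parallel-beam sum along the depth axis, the pose is collapsed to a single integer in-plane shift, and a grid search stands in for the numerical optimizer; all function names are invented for this sketch.

```python
import math

def project(volume, shift):
    """Toy parallel-beam projection: sum voxels along the depth axis
    after an integer row shift standing in for the pose parameters."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    proj = [[0.0] * nx for _ in range(ny)]
    for z in range(nz):
        for y in range(ny):
            ys = y + shift
            if 0 <= ys < ny:
                for x in range(nx):
                    proj[y][x] += volume[z][ys][x]
    return proj

def ncc(a, b):
    """Normalized cross-correlation between two 2D images (the
    similarity metric maximized by the optimizer)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((p - ma) * (q - mb) for p, q in zip(fa, fb))
    da = math.sqrt(sum((p - ma) ** 2 for p in fa))
    db = math.sqrt(sum((q - mb) ** 2 for q in fb))
    return num / (da * db) if da and db else 0.0

def register(volume, radiograph, candidate_shifts):
    """Grid search standing in for the numerical optimization: return
    the pose whose projection best matches the 2D image."""
    return max(candidate_shifts,
               key=lambda s: ncc(project(volume, s), radiograph))
```

In practice the pose has six rigid degrees of freedom, the projection is a digitally reconstructed radiograph computed along the actual X-ray geometry, and a gradient-based or stochastic optimizer replaces the grid search, initialized by the approximate pose estimate mentioned above.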
- the target area includes a portion of a human spine, where the 2D image includes an intraoperative radiograph, where the result of the 3D-2D registration includes data representing a registration of spine labels onto the intraoperative radiograph, such that vertebral localization is achieved.
- the method includes computing a quality metric representing the 3D-2D registration; and displaying a representation of the metric.
- the representation of the metric includes a color code.
- a device includes at least one processor; and a memory containing instructions for execution by the at least one processor, such that the instructions cause the processor to perform the method for automatically registering medical images, the method including: obtaining one or more three-dimensional (3D) magnetic resonance (MR) images of a target area; obtaining a two-dimensional (2D) image of at least a portion of the target area; computing a MR-computed tomography (CT) synthesis from the 3D MR image that was obtained; computing a 3D-2D registration based on the MR-CT synthesis and the 2D image that was obtained; and outputting a result of the 3D-2D registration.
- the 3D image is obtained from a 3D acquisition device selected from the group consisting of: CT, SPECT, MR, linear accelerator with onboard X-ray imaging, C-arm with 3D imaging capability, and an X-ray-based modality.
- the outputting the result includes providing an image of a common representation of the registered 3D and 2D images.
- the outputting the result includes controlling a medical intervention by providing at least one of: 3D surgical guidance, navigation support, structural localization guidance, improved confidence as an independent check on existing methods of structure localization.
- the 3D image and the 2D image are obtained separately.
- computing the 3D-2D registration includes: iteratively computing a 2D projection view from the 3D image; and computing a target 2D projection view that best matches the 2D image by using a numerical optimization that maximizes similarity between the target 2D projection view and the 2D image.
- the method performed by the device includes estimating an approximate pose between the 3D image and the 2D image as an initial estimate for the numerical optimization.
- the target area includes a portion of a human spine, where the 2D image includes an intraoperative radiograph, where the result of the 3D-2D registration includes data representing a registration of spine labels onto the intraoperative radiograph, such that vertebral localization is achieved.
- the instructions further cause the processor to: compute a quality metric representing the 3D-2D registration; and display a representation of the metric.
- the representation of the metric includes a color code.
- FIG. 1 illustrates an example block diagram of the GENESIS MR-CT synthesis algorithm, according to embodiments.
- FIG. 2 illustrates an example portion of the synthesis algorithm, according to embodiments.
- FIG. 3 illustrates another example portion of the synthesis algorithm, according to embodiments.
- FIG. 4 illustrates yet another example portion of the synthesis algorithm, according to embodiments.
- FIG. 5 illustrates still another example portion of the synthesis algorithm, according to embodiments.
- Bottom row shows the Siemens Dixon, Siemens UTE-based μ-map, and our GENESIS result for the subject.
- FIGS. 7A-7C show corresponding axial sections of μ-maps of a subject from original CT (A), GENESIS (B), and deformable registration (C), demonstrating that, visually, the GENESIS μ-map is closer to the original CT-based μ-map than is that obtained by deformable registration.
- the cystic lesion in the left frontal lobe (arrow 705) is well represented by GENESIS but not by deformable registration.
- the dilation of the right lateral ventricle (arrow 715) is not represented in the deformable registration.
- misregistration in the posterior fossa mislabels much of the cerebellum as bone (arrow 710).
- FIGS. 8A and 8B show a comparison of tissue classification results for (A) bone and (B) air across the different methods as compared to the gold standard original CT.
- GENESIS most closely corresponds to the gold standard. Note that the Siemens Dixon method does not allow for bone classification and hence is not represented in (A).
- FIGS. 9A-9D show a comparison of the final attenuation correction process for a single subject using the different methods.
- Initial MRI UTE images (A) were converted into μ-maps (B); the μ-maps generated using Siemens Dixon, Siemens UTE, deformable registration, and GENESIS are compared to the gold standard CT. Note the blurring of bone introduced by the deformable registration method (white arrow in B).
- the attenuation corrected PET images (C) appear grossly similar
- the images (D) representing the absolute difference between each of the 4 methods and the original CT based attenuation corrected PET demonstrate marked differences. Note that the colorbar for the difference images represents a 10-fold increase in scale relative to that for the original images.
- FIG. 10 shows scatter plots of CT-based PET intensities vs. MR-based PET intensities (×10⁴) at each voxel of the PET images for Siemens Dixon, UTE, registration, and GENESIS. Solid lines indicate unit slope and dotted lines are a robust linear fit of the data.
- FIG. 11 shows a comparison of GENESIS results using different reference data.
- Top row shows the UTE image and the CT μ-map of a subject. The similarity of all 5 images in the bottom row, each generated using a different atlas, indicates the robustness of the GENESIS method and its relative independence from the choice of reference atlas.
- FIG. 12 is a flowchart showing an overview of a system for carrying out a method according to the invention, including preoperative and intraoperative steps.
- FIG. 13 is a flowchart showing method steps for localization
- FIG. 14 schematically illustrates a related LevelCheck method that uses a 3D CT image.
- FIG. 15 schematically illustrates an example MR-LevelCheck method that uses a 3D MR image, according to embodiments.
- FIG. 16 schematically illustrates an example 3D-2D registration method that can be used in the method of FIG. 15, according to embodiments.
- FIG. 17 is a flowchart depicting a method according to some embodiments.
- FIG. 18 illustrates an example computer system that can, in part, implement the methods of FIG. 14 and/or Fig. 15, according to some embodiments.
- Section I presents a technique for synthesizing, for example, a CT image from an MR image.
- Section II presents a technique for localizing and identifying structures in projection images.
- Section III presents a technique that combines the material from Sections I and II to achieve a technique for automatically localizing and identifying structures of interest in 2D images (e.g., intraoperative radiographs or fluoroscopy) that are synthesized from 3D images (e.g., preoperative MRI).
- the material of Section III may utilize any synthesis technique, not limited to the example techniques of Section I.
- This section presents example techniques for synthesizing a CT image from an MR image.
- the techniques presented in this section may be used as part of the techniques of Section III. More particularly, the synthesis techniques of this section may be employed as the synthesis techniques utilized in that section.
- the techniques of this section are non-limiting, in the sense that the techniques of Section III may employ any suitable synthesis techniques, not limited to those disclosed in this section.
- the techniques of this section are examples of synthesis techniques, but other examples may be employed in the alternative.
- the structure to be localized is a vertebra.
- the structure has been defined (i.e., "segmented") preoperatively in CT, which is referred to as "planning data", and the 2D images in which the structure (planning data) is to be localized are intraoperative fluoroscopy images obtained on a C-arm.
- the structure could be one or more of any 3D structure(s) of interest, e.g., tumors, anatomical landmarks, vessels, nerves, etc.
- the structure(s) could be defined in any 3D or 4D image obtained either preoperatively or intraoperatively.
- the 2D image in which to localize the structure could be any form of projection image, e.g., fluoroscopy, radiography, or a "projection" MR image.
- the purpose of the method of this section is to automatically localize (i.e., identify the location of) the structure(s) defined in the 3D image directly within the intraoperative 2D image.
- the location of the 3D structure within the 2D image can be automatically computed and displayed to the surgeon.
- the invention of this section provides, as an alternative to the state of the art, a method for automatic localization of predefined 3D structures (e.g., vertebrae) in 2D fluoroscopic/radiographic images using 3D-2D registration.
- 3D/2D registration between preoperative CT and X-ray projections has been explored extensively [see Markelj P, Tomazevic D, Likar B, Pernus F., "A review of 3D/2D registration methods for image-guided interventions," Med. Image Anal. In Press, Corrected Proof], e.g., in radiation therapy, with the goal of registering between the patient and the treatment plan.
- An intensity-based method, which uses all of the image information as opposed to a feature-based method, is a promising approach to improving the accuracy of 3D/2D registration.
- Two commercial radiotherapy systems, including the CyberKnife® Robotic Radiosurgery System (Accuray Incorporated, Sunnyvale, Calif.) [see Fu D, Kuduvalli G., "A fast, accurate, and automatic 2D-3D image registration for image-guided cranial radiosurgery," Med. Phys.
- the method comprises an algorithm named GENErative Sub-Image Synthesis (GENESIS) that is used to synthesize a CT image (or ⁇ -map) from dual echo UTE images using reference or training data.
- the "reference" data is distinguished from the "atlases" of atlas-based methods in that an atlas must be registered to the subject, whereas no registration is needed between a reference and the subject.
- the provided approach matches patches between the reference images and subject images based on a statistical model. It does not require any segmentation of the MR images and does not require the reference images to be registered to the subject images. Moreover, unlike most algorithms, which only utilize the MR images to determine the optimal matching patches in the reference data, the algorithm utilizes the reference CT image as well.
- the inputs for the provided CT synthesis comprise reference data that can be defined as a triplet of co-registered images {a1, a2, a3} having the same resolution, where a1 and a2 denote dual echo UTE images; typically the first echo shows signal in bone and the second echo does not.
- the variable a3 denotes the corresponding CT.
- the MR feature vector at the j-th voxel of the reference images is the concatenation of corresponding UTE patches, denoted by y_j ∈ R^(2d).
- the subject patches are denoted P = {p_i} and the reference patches Q = {q_j}; the CT components of the subject patches are unobserved.
- the subject and reference patches represent local patterns of intensities that have been scaled to a similar intensity range. Therefore, a reference patch y_j that has a pattern of intensities similar to a given subject patch x_i likely arises from the same distribution of tissues. In that case, the corresponding CT patch in the reference can be expected to represent an approximate CT contrast of the subject patch.
- the nearest patch may not be a close representation because the patches are relatively sparse in their high-dimensional space (e.g., a 3 x 3 x 3 patch exists in a 27-dimensional space).
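A single-nearest-neighbour lookup, the naive baseline that the sparsity argument above cautions against, can be written as follows (an illustrative sketch with invented names, not the GENESIS estimator):

```python
def nearest_ct_patch(subject_patch, ref_mr_patches, ref_ct_patches):
    """Return the CT patch paired with the reference MR patch that is
    closest (squared L2 distance) to the subject MR patch."""
    def d2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    j = min(range(len(ref_mr_patches)),
            key=lambda k: d2(subject_patch, ref_mr_patches[k]))
    return ref_ct_patches[j]
```

Because patches are sparse in their high-dimensional space, even the closest reference patch can be far away, which is why the text combines pairs of reference patches rather than relying on a single match.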
- Two techniques are used to address the sparsity.
- the subject's unknown CT patch is a random vector whose mean is also a convex combination of reference CT patches.
- the CT patch has a covariance matrix that is unknown and different from the unknown MR patch covariance matrix.
- Σ_t is a covariance matrix associated with the j-th and k-th reference patches.
- the weighting coefficient α_it ∈ (0, 1) is larger when the i-th subject patch is more similar to q_j.
- p_i is modeled as a Gaussian mixture over all possible pairs of reference patches (q_j and q_k).
- Λ is the set of all pairs of reference patch indices. From Eqn. 1, t is an element of Λ. Each subject patch is then assumed to follow a |Λ|-class Gaussian mixture model (GMM), where each of the mixture components contains two reference patches.
- an estimated subject CT patch is a weighted average (weighted by w_it) of convex combinations (associated with α_it) of all reference CT patch-pairs (v_j and v_k, for all j, k).
- a subject UTE patch (x_i) is a Gaussian perturbation of a convex combination of a reference UTE patch-pair (y_j and y_k).
- the weight depends on the similarity between the subject UTE patch (x_i) and the relevant reference UTE patches (y_j, y_k), as well as the similarity between the estimated subject CT patch (u_i) and the reference CT patches (v_j, v_k).
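The weighted average of convex combinations can be illustrated with a simplified estimator in which the weights depend only on MR patch similarity. Per the text, the actual algorithm also weights by CT patch similarity within an EM framework, so this is a stand-in: the Gaussian weighting constant, the discretization of the convex coefficient, and all names are assumptions of this sketch.

```python
import math

def synthesize_patch(x, ref_mr, ref_ct, alpha_steps=11, sigma=1.0):
    """Estimate a subject CT patch as a similarity-weighted average of
    convex combinations of reference CT patch pairs."""
    dim = len(x)
    out_dim = len(ref_ct[0])
    num = [0.0] * out_dim
    den = 0.0
    n = len(ref_mr)
    for j in range(n):
        for k in range(j + 1, n):            # every reference patch pair
            for s in range(alpha_steps):
                a = s / (alpha_steps - 1)    # convex coefficient in [0, 1]
                # convex combination of the MR patch pair
                m = [a * ref_mr[j][t] + (1 - a) * ref_mr[k][t] for t in range(dim)]
                d2 = sum((x[t] - m[t]) ** 2 for t in range(dim))
                w = math.exp(-d2 / (2 * sigma ** 2))   # Gaussian similarity weight
                # the same convex combination applied to the paired CT patches
                c = [a * ref_ct[j][t] + (1 - a) * ref_ct[k][t] for t in range(out_dim)]
                num = [nu + w * ct for nu, ct in zip(num, c)]
                den += w
    return [nu / den for nu in num]
```

For a subject patch halfway between two reference MR patches, the symmetric weights place the estimate halfway between the corresponding CT patches, as one would expect of a convex-combination model.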
- the estimation of the synthetic CT patches u_i is derived as follows.
- the Expectation-Maximization framework takes the perspective of an incomplete versus complete data problem to compute the best matching patch.
- the probability of observing p_j can be written as,
- f_(m)it and g_(m)it are the expressions defined in Eqn. 3 but with α_(m)it; f_(m)il and g_(m)il denote the corresponding values with reference patches belonging to the l-th pair, l ∈ Λ.
- the synthetic patches are obtained by the following expectation,
- FIG. 1 illustrates an example block diagram of the GENESIS-MR-CT synthesis algorithm, according to embodiments.
- GENESIS-MR-CT uses a previously acquired atlas containing two DE-UTE MR images and a CT image. All images are of the same subject and are co-registered.
- GENESIS-MR-CT then takes the two input subject DE-UTE images and applies the GENESIS synthesis algorithm to produce a synthetic CT image having the same anatomy, position, and size as the input MR images.
- FIG. 2 illustrates an example first step in the method, according to embodiments.
- the atlas is divided into overlapping patches (or feature vectors).
- the features that are used in the reference images for GENESIS are shown.
- voxel values from a surrounding three-dimensional neighborhood are arranged as a feature vector. Since there are two MR images, the feature vectors from the two images are stacked into a single MR reference feature vector.
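The feature-vector construction just described, flattening a 3D neighborhood from each of the two UTE echoes and stacking the results, might look like the sketch below (hypothetical names). With patch radius r = 1 each echo contributes a 27-dimensional patch, giving the 2·27 = 54-dimensional stacked vector, consistent with y_j ∈ R^(2d) for d = 27.

```python
def feature_vector(ute1, ute2, z, y, x, r=1):
    """Concatenate the (2r+1)^3 neighborhood voxel values around
    (z, y, x) from both dual-echo UTE images into one MR feature
    vector (images as nested lists indexed [z][y][x])."""
    def patch(img):
        return [img[z + dz][y + dy][x + dx]
                for dz in range(-r, r + 1)
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)]
    return patch(ute1) + patch(ute2)
```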
- FIG. 3 illustrates an example second step in the method, according to embodiments. As shown in FIG. 3, patches from the subject DE-UTE scans are extracted.
- FIG. 4 illustrates an example third step in the method, according to embodiments.
- similar patches in the DE-UTE atlas are identified and a synthetic DE-UTE patch is identified as the convex combination of two similar patches.
- the two reference patches (patch 2, 405B, and patch 3, 405C) are combined by convex combination (yielding the dotted patch 420), and a subject patch (the circle 410) is associated with this convex combination of the reference patches 405A, 405B, and 405C.
- FIG. 5 illustrates an example fourth step in the method, according to embodiments.
- corresponding patches in the CT atlas are used to synthesize a CT patch.
- the center voxel within the synthesized patch is used as the synthetic CT intensity.
- the above process can be repeated for each voxel in the source image, thereby synthesizing a complete CT image.
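The per-voxel loop described above — synthesize a patch at every location and keep only its center value — can be sketched on a 1D "image" using a single-nearest-reference stand-in for the GENESIS patch estimate (all names invented for this sketch):

```python
def extract_patches(signal, r=1):
    """Overlapping patches of width 2r+1 around each interior voxel."""
    return [signal[i - r:i + r + 1] for i in range(r, len(signal) - r)]

def synthesize(signal_mr, ref_mr, ref_ct, r=1):
    """For each interior voxel, match the closest reference MR patch
    and keep the centre value of its paired CT patch as the synthetic
    CT intensity."""
    out = []
    for p in extract_patches(signal_mr, r):
        j = min(range(len(ref_mr)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(p, ref_mr[k])))
        out.append(ref_ct[j][r])   # centre voxel of the matched CT patch
    return out
```

Repeating this at every voxel of a 3D volume, with the pairwise convex-combination estimate in place of the nearest match, yields the complete synthetic CT described in the text.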
- a convex combination of reference patches (patch 2, 505B, and patch 3, 505C) is then used to synthesize a CT patch (the dotted circle 510).
- the PET data acquired on the PET/MRI and the CT data acquired on the PET/CT were used. Attenuation correction of the PET data was performed using four different methods of generating the attenuation map (μ-map). Images from an additional 30 subjects who had MRI with UTE sequences, 29 subjects with Dixon MRI, and 31 subjects with CTs were also used for comparison later.
- MR and PET images were acquired on a 3T Siemens Biograph mMR.
- PET was acquired for five minutes, approximately 90 minutes after injection of approximately 370 MBq of 18F-fluorodeoxyglucose.
- Attenuation maps were inserted in the original MR-AC DICOM files and imported into a specialized computer workstation for PET retrospective image reconstruction.
- the image reconstruction was performed using a 3D ordered-subset expectation-maximization algorithm, as disclosed in Hudson HM, Larkin RS. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans Med Imaging. 1994;13:601-609, which is hereby incorporated by reference in its entirety, with 3 iterations and 21 subsets on a 344 x 344 x 127 matrix with a 4-mm Gaussian filter.
- CT images were acquired on a Siemens Biograph128 PET/CT scanner with a tube voltage of 120 kVp, with dimension 512 x 512 x 149 and 0.58 x 0.58 x 1.5 mm³ resolution. CT images were rigidly registered to the corresponding MRI from the same subject. Real and synthetic CT images were transformed to μ-maps (unit cm⁻¹) using the following criteria (11),
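The exact conversion criteria (the Eqn. 11 referenced above) are not reproduced in this excerpt. Purely as an illustration, a bilinear HU-to-μ mapping of the kind commonly used for CT-based attenuation correction could look like this; the water attenuation value and bone slope below are assumed round numbers, not the patent's constants.

```python
MU_WATER = 0.096   # cm^-1 at 511 keV (approximate, assumed)

def hu_to_mu(hu, bone_slope=5.0e-5):
    """Illustrative bilinear HU -> mu conversion: below 0 HU, scale
    linearly so -1000 HU (air) maps to 0 and 0 HU (water) maps to
    MU_WATER; above 0 HU use a shallower, hypothetical bone slope."""
    if hu <= 0:
        return max(0.0, MU_WATER * (hu + 1000.0) / 1000.0)
    return MU_WATER + bone_slope * hu
```

The two-segment form reflects that soft tissue is approximately water-like while bone attenuates 511 keV photons proportionally less than its CT number suggests.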
- each μ-map or CT image was affine registered to a template CT image.
- the neck region was manually defined on the template by identifying an axial slice that corresponds to the neck and head boundary on the template.
- the neck regions were removed from each of the images using the corresponding axial slice.
- Bone and air volumes were computed using a threshold on the CT images (bone threshold 300 HU, air threshold -1000 HU), or directly from the μ-maps for the Siemens-generated results.
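The quoted thresholds translate directly into voxel counting; here is a minimal sketch (hypothetical function name) that computes bone and air fractions from a flat list of HU values:

```python
def tissue_fractions(hu_values, bone_thresh=300, air_thresh=-1000):
    """Fraction of voxels classified as bone (HU >= 300) and as air
    (HU <= -1000), using the thresholds quoted in the text."""
    n = len(hu_values)
    bone = sum(1 for v in hu_values if v >= bone_thresh) / n
    air = sum(1 for v in hu_values if v <= air_thresh) / n
    return bone, air
```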
- Table 1: CT-based μ-maps are compared with the GENESIS, Siemens Dixon, Siemens UTE, and registration-based μ-maps on 5 subjects (shown in each row). Correlation (ρ) and peak signal-to-noise ratio (PSNR) in dB are chosen as error metrics, assuming the CT μ-map as the truth. GENESIS produces higher correlation and PSNR than the other three methods on all 5 subjects.
- FIG. 8 shows percent air and bone fractions with respect to the relevant subject pool.
- Siemens UTE μ-maps, GENESIS, and registration have 30 subjects, Siemens Dixon contains 28 subjects, and true CT contains 31 subjects. Since Dixon images do not provide bone segmentation, they are only used for the air fraction comparison.
- GENESIS provides bone fraction percentages (median 16%) similar to the original CT (median 17%).
- FIG. 9B shows the synthetic CT results on a subject comparing four methods. Since the deformable registration is inaccurate near the eyes and nasal cavity, errors are seen there (arrow 905 in FIG. 9B), while the GENESIS μ-map is visually closer to the truth. It also has better bone-to-soft-tissue discrimination than the Siemens UTE based μ-maps.
- Reconstructed PET images from the five μ-maps are shown in FIG. 9C. Assuming the true CT reconstructed PET as the ground truth, GENESIS provides the closest reconstructed images to the truth compared to the other three PET reconstructed images, as also seen from the difference images in the bottom row of FIG. 9D. Visually, GENESIS produces very similar reconstructed images inside the brain.
- Table 2 shows a comparison between the four methods and CT reconstructed PET images, with respect to correlation, PSNR, regression slopes, and R².
- Slopes of the linear regression should ideally be unity.
- for GENESIS, slopes across four of the subjects are closer to unity than for the other three methods.
- for GENESIS, R² for all five subjects is closer to unity than for the other three methods.
- The smallest PSNR for GENESIS among the five subjects can be attributed to the fact that subtle lesions near the left ventricles were not synthesized, although the large lesion was synthesized well (arrow 705 in FIG. 7B).
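The error metrics used in these comparisons (correlation, PSNR, regression slope, R²) can be computed as follows. This is a sketch; the choice of the ground-truth maximum as the peak value for PSNR is an assumption, since definitions vary.

```python
import numpy as np

def evaluation_metrics(test_img, truth_img):
    # Correlation, PSNR (dB), and linear-regression slope / R^2 of test
    # vs. truth intensities, mirroring the metrics of Tables 1 and 2.
    x = truth_img.ravel().astype(float)
    y = test_img.ravel().astype(float)
    rho = float(np.corrcoef(x, y)[0, 1])
    mse = float(np.mean((x - y) ** 2))
    peak = float(x.max())  # assumption: peak = max of the ground truth
    psnr = float(10.0 * np.log10(peak ** 2 / mse)) if mse > 0 else float("inf")
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - float(np.sum(resid ** 2)) / float(np.sum((y - y.mean()) ** 2))
    return {"rho": rho, "psnr_db": psnr, "slope": float(slope), "r2": r2}

# Sanity check: an image compared against itself is a perfect match.
img = np.linspace(1.0, 100.0, 200).reshape(10, 20)
metrics = evaluation_metrics(img, img)
```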
- Table 2 Reconstructed PET images from CT-based μ-maps are compared with PET images from MR-based μ-maps on 5 subjects via correlation, PSNR (in dB), R², and linear regression slope from the scatter plots (such as FIG. 10), assuming CT-based PET as the ground truth.
- Table 3 For a subject, 5 synthetic μ-maps are generated using 5 different references, and are compared with the CT-based μ-map using correlation and PSNR. Corresponding PET reconstructions are also compared with the true CT-based PET.
- a framework is provided in this section to synthesize CT images using dual echo UTE images from a reference.
- GENESIS does not employ deformable registration, which can sometimes suffer from poor performance when the atlas and subject images are geometrically dissimilar. Patch matching can also be susceptible to suboptimal performance if the reference data is not sufficiently rich.
- GENESIS enriches the reference data by considering convex combinations of patches sampled from Gaussian mixture distributions.
- deformable registrations usually take significant time as a preprocessing step (~1 hour with ANTS (15)).
- the inventors have previously presented a CT synthesis method from a single T1-w image for the sole purpose of image registration (18,19).
- standard T1-w MR images do not have sufficient contrast to distinguish bone from air. Therefore the synthetic CT images were not as accurate as the ones synthesized using dual-echo UTE.
- because the synthesis application was aimed at improving registration of MR and CT brain images, the imperfections in bone regions did not substantially impact the results.
- for the present application, however, bone is critically important and the previous method would not be well suited.
- the present approach provides an improvement to MR- CT registration.
- Referring to FIG. 12, there is seen a system for carrying out a method which includes a preoperative step 1201, wherein a CT image or another volumetric image is taken.
- Projection data can then be computed from this image.
- step 1202 includes an acquisition of a 2D X-Ray image.
- Reference is made to FIG. 13, which shows an example of the proposed workflow for localization and identification of a structure in a projection image.
- a preoperative 3D image provides the basis for 3D-2D registration.
- the preoperative image could be a CT image (which is preferred) or another volumetric image modality from which projection data may be computed.
- a preoperatively acquired diagnostic CT is represented in Hounsfield Units (HU), defined as HU = 1000 × (μ − μ_water)/μ_water, where μ is the linear attenuation coefficient of the voxel and μ_water is the coefficient of water at the X-ray energy which was used for the CT scanning.
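The HU relation (HU = 1000·(μ − μ_water)/μ_water) can be written as a small conversion utility. The μ_water value below is only an illustrative attenuation coefficient of water, roughly valid near 70 keV, and is an assumption.

```python
def mu_to_hu(mu, mu_water):
    # HU = 1000 * (mu - mu_water) / mu_water
    return 1000.0 * (mu - mu_water) / mu_water

def hu_to_mu(hu, mu_water):
    # Inverse mapping, as used when converting a CT volume back to a mu-map.
    return mu_water * (1.0 + hu / 1000.0)

MU_WATER = 0.096  # cm^-1, illustrative value near 70 keV (assumption)
```

By construction, water maps to 0 HU and a μ of zero maps to -1000 HU.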
- the target anatomical structure is segmented.
- the segmentation can be done by either: i) delineating the target anatomy manually or by any variety of automatic or semiautomatic segmentation approaches; or ii) simply identifying the point within the target structure that is to be projected, at the approximate center of the projected target anatomy, in a projection image (e.g., the anterior-posterior (AP) view).
- This segmentation step is depicted as preprocessing in a step 1312 in FIG. 13.
- I0 is defined by using the intensity of a pixel in an area with no visible object.
- the estimate does not need to be accurate.
- the approximate pose could be induced from a surgical protocol, which usually indicates the position of the patient on the operating table with respect to the imager (e.g. supine position, prone position, etc.).
- a digitally reconstructed radiograph is generated, for example by using graphical processing unit (GPU)-acceleration, as is seen from step 1330 in FIG. 13.
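As a toy stand-in for the GPU-accelerated DRR generation described here, a parallel-beam line-integral projection can illustrate the idea. This is a simplification: clinical DRR engines use perspective cone-beam geometry (e.g., Siddon ray tracing) rather than this axis-aligned sum.

```python
import numpy as np

def parallel_beam_drr(mu_volume, spacing_mm, axis=0):
    # Line integrals of attenuation along one axis (parallel rays),
    # converted to transmitted intensity with the Beer-Lambert law.
    # mu is in cm^-1 and spacing in mm, hence the /10 conversion.
    line_integral = mu_volume.sum(axis=axis) * (spacing_mm / 10.0)
    return np.exp(-line_integral)  # I / I0 per detector pixel

# Uniform water-like block: 10 voxels of 1 mm at mu = 0.2 cm^-1.
drr = parallel_beam_drr(np.full((10, 2, 2), 0.2), spacing_mm=1.0)
```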
- the generated DRR and the fixed (preprocessed) X-ray projection image are compared by a similarity measure, e.g., mutual information (MI) or (inverse) sum-of-squared-differences (SSD) between the two 2D images.
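The two similarity measures named above (MI and inverse SSD) can be sketched with numpy. This is a minimal illustration; production implementations typically use more careful histogramming and masking.

```python
import numpy as np

def inverse_ssd(a, b):
    # Higher is better: negative mean squared intensity difference.
    return -float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def mutual_information(a, b, bins=32):
    # Histogram-based mutual information estimate between two images.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of b
    nz = pxy > 0                                # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

img = np.arange(64.0).reshape(8, 8)
```

An image compared against itself gives the maximum of both measures: an inverse SSD of zero and a strictly positive MI.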
- a radiation therapy linear accelerator gantry, robotic radiotherapy device, or radiotherapy simulator.
- target structures/anatomy are not confined to the spine. Such could be equally useful in other areas where intraoperative x-ray images are used in a clinical routine for "searching" to localize a structure. This includes guidance for:
- implanted devices visible in preoperative images, e.g., stents, catheters, implants, etc.;
- endoscopic surgery, e.g., NOTES.
- This section provides a method for automatically localizing structures of interest in intraoperative 2D images (e.g., radiographs or fluoroscopy), starting from preoperative 3D images (e.g., preoperative MRI).
- An MR-CT synthetic image is any CT-like image that is created from one or more MR images of the same subject.
- the techniques disclosed in Section I may be used to generate an MR-CT synthetic image.
- Implementations according to this section operate in a manner similar to that disclosed in Section II, above, with a difference being that the present technique allows for registration based upon a preoperative MRI (whereas Section II uses a preoperative CT). Implementations according to this section can extend the capability of the technique of Section II to include scenarios in which only a preoperative MRI is available.
- implementations according to this section operate using a method for MR-to-CT "synthesis," such as, by way of non-limiting example, any of those described in Section I, above.
- the present method can integrate MR-CT synthesis with LevelCheck (CT-to-Projection registration, described in Section II) to allow the same functionality as LevelCheck but operating on preoperative MRI and without the requirement of a preoperative CT.
- other MR-CT synthesis approaches can also be used from which the present teachings follow.
- the present MR- LevelCheck method first computes a MR-CT synthesis using any suitable technique, and then computes a CT-to-Projection registration.
- the method therefore allows automatic localization of structures of interest within a 2D projection image.
- labels e.g., point, line, contour, or volume data defined in the preoperative 3D image
- both the LevelCheck method of Section II and the present MR-LevelCheck method of this section are general as to what structure defined in the 3D image is registered onto the radiograph. That structure could be a point label, a linear trajectory, a curve, surface, volume, etc. Any geometric structure defined in the 3D image can be registered and overlaid on the intraoperative radiograph, depending on the application.
- aspects of the present section provide for the combination of MR-CT Synthesis with a LevelCheck method for automatically localizing structures of interest in a 2D projection image. This combination extends the capability of LevelCheck (which uses a preoperative CT) to operation based upon a preoperative MRI.
- FIG. 14 schematically shows a related LevelCheck method that uses a 3D CT image.
- FIG. 14 depicts a method as disclosed in Section II, above.
- Block 1402 depicts a preoperative 3D CT image, along with label data, as described above in Section II.
- Block 1404 depicts an iteratively-constructed digitally reconstructed radiograph (DRR).
- the DRR of block 1404 is constructed using the preoperative 3D CT image of block 1402, along with an output of a comparison process depicted at block 1410, used for optimization as described in detail above in Section II.
- the 3D-2D registration is represented by block 1408, which accepts as inputs the DRR of block 1404 as well as the intraoperative X-ray projection of block 1406. This process is described above in Section II.
- The comparison process of block 1410 receives data from the registration of block 1408 and, once the comparison is acceptable, the process outputs an X-ray projection with registered planning data overlaid.
- FIG. 15 schematically shows an example MR-LevelCheck method that uses a 3D MR image, according to embodiments.
- the method of FIG. 15 is similar to that of FIG. 14, except that the method accepts as input an MR image instead of a CT image.
- block 1502 represents a preoperative 3D MR image and associated planning data that is used as an input to an MR-CT synthesis process, e.g., as described above in Section I.
- the MR-CT synthesis process outputs synthesized CT data 1504 as described in detail herein in Section I.
- Such synthesized data is input into an iterative process that generates a DRR, represented by block 1510.
- the iterative process outputs DRR data to a 3D-2D registration process represented by block 1508, which also receives as an input an intraoperative X-ray projection represented by block 1506.
- An example registration process is described above in Section II.
- the 3D-2D registration process outputs to a comparison process, represented by block 1512, which compares a gradient of the X-ray to a current DRR in the iteration as described in Section II above. Once the comparison is acceptable, as described above in Section II, the process outputs an X-ray projection with registered planning data overlaid thereon, as represented by block 1514.
- FIG. 16 schematically shows an example 3D-2D registration method, according to embodiments.
- FIG. 16 depicts in part example actions of blocks 1510, 1508, and 1512, among others, of FIG. 15.
- Alternative variations of the 3D-2D registration include, but are not limited to, variations in the optimization method (CMA-ES) and similarity metric (gradient information ("GI"), normalized gradient information ("NGI"), and gradient correlation ("GC")).
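One common form of the gradient correlation ("GC") metric, for example, can be sketched as follows. This is an illustration of the general idea (mean Pearson correlation of the per-axis image gradients), not necessarily the exact formulation used in the implementation.

```python
import numpy as np

def gradient_correlation(fixed, moving):
    # Mean of the Pearson correlations between the axis-0 and axis-1
    # gradients of the two images.
    def ncc(u, v):
        u = u - u.mean()
        v = v - v.mean()
        denom = np.sqrt((u * u).sum() * (v * v).sum())
        return float((u * v).sum() / denom) if denom > 0 else 0.0
    g0_f, g1_f = np.gradient(fixed.astype(float))
    g0_m, g1_m = np.gradient(moving.astype(float))
    return 0.5 * (ncc(g0_f, g0_m) + ncc(g1_f, g1_m))

# A nonlinear test image: gradients vary, so self-comparison gives GC = 1.
img = (np.arange(25.0).reshape(5, 5)) ** 2
gc_self = gradient_correlation(img, img)
```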
- This particular implementation of 3D-2D registration includes a process for fast calculation of a digitally reconstructed radiograph (DRR) 1606 from either a CT 1604 (as in Section II) or from a CT "synthesized" from an MRI (as in, e.g., block 1504 of FIG. 15).
- Similarity function 1608 accepts as inputs radiograph 1602 and DRR 1606.
- the implementation of FIG. 16 also includes an optimizer 1610 that uses, for example, the CMA-ES method. Optimization method 1610 outputs data to transformation and source position module 1612, which processes data as described herein in Section II and provides it to update DRR 1606.
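The overall iterative loop of FIG. 16 (project a DRR at the current pose, score it against the radiograph, let the optimizer update the pose) can be summarized in skeleton form. A simple accept-if-better Gaussian search stands in for CMA-ES here, and the 1-D "projection" in the usage example is purely a placeholder.

```python
import numpy as np

def register_3d2d(volume, xray, project, similarity, t0, step=1.0, iters=200, seed=0):
    # Skeleton of the FIG. 16 loop. `project` plays the role of the DRR
    # generator, `similarity` the metric (e.g., GI), and the random search
    # stands in for the CMA-ES optimizer.
    rng = np.random.default_rng(seed)
    t_best = np.asarray(t0, dtype=float)
    s_best = similarity(project(volume, t_best), xray)
    for _ in range(iters):
        t_try = t_best + rng.normal(scale=step, size=t_best.shape)
        s_try = similarity(project(volume, t_try), xray)
        if s_try > s_best:  # keep the pose only if the DRR matches better
            t_best, s_best = t_try, s_try
    return t_best, s_best

# Placeholder 1-D "projection": the pose parameter itself is the DRR value,
# so the optimum pose is the radiograph value 3.0.
xray = np.array([3.0])
pose, score = register_3d2d(
    volume=None,
    xray=xray,
    project=lambda vol, t: np.array([t[0]]),
    similarity=lambda a, b: -float(np.sum((a - b) ** 2)),
    t0=[0.0],
    step=0.5,
)
```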
- Implementations of the present MR-LevelCheck method can allow operation based upon a 3D MRI, as opposed to a 3D CT as in the original LevelCheck. For some spine surgeons, for example, this would be helpful in that their standard of care involves a preoperative MRI (but not CT).
- the present method and device can use any type of MR image, i.e., pulse sequence, for which a MR-to-CT synthesis can be performed.
- FIG. 17 is a flowchart of a method according to some embodiments.
- the method obtains an MR image.
- the MR image may be obtained using any conventional source of MR images, e.g., directly from an MR apparatus or over a network from a stored data source.
- the method obtains a 2D image, e.g., a conventional X-ray, as disclosed herein.
- the method performs CT image synthesis from the MR image, and at block 1708, the method computes 3D-2D registration.
- Example techniques for performing the actions of these blocks are described in detail above in Sections I and II;
- the method outputs a result.
- the output may include providing a common representation of the registered 3D and 2D images, for example.
- FIG. 18 illustrates an example computer system that can, in part, implement the methods of FIGS. 14-17, according to some embodiments.
- the computer system 1800 can be implemented as various computer systems, such as a personal computer, a server, a workstation, an embedded system, a distributed system, a multifunction device, or a combination thereof.
- the processes described herein can be implemented in part as a computer program.
- the computer program can exist in a variety of forms both active and inactive.
- the computer program can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats, firmware program(s), or hardware description language (HDL) files. Any of the above can be embodied on a computer readable medium, which can include storage devices, in compressed or uncompressed form. Examples of the components that can be included in system 1800 will now be described.
- the example computer system can bring the preoperative MRI from the hospital information system (HIS / PACS, which is outside the operating room for surgery, interventional suite for interventional radiology, and treatment room for radiotherapy) or from an intraoperative MRI (which is inside the operating room).
- the example system can be operable for automatically reading that MRI into the MR-LevelCheck system.
- the example system can be internal to the MR-LevelCheck system for computing a MR-to-CT synthesis.
- the example system can be operable for defining the "labels" (points, lines, surfaces, volumes, etc., or vertebral labels, of course) on the MRI.
- the example system can be operable for computing the 3D-2D registration of FIG. 16 operating upon the "synthesized" CT.
- the example system can be operable for automatically reading the x-ray projection image (radiograph or fluoroscopy) from the intraoperative x-ray imaging system into the system for MR-LevelCheck.
- system 1800 can include at least one processor 1802,
- input/output devices 1816 may include a keyboard, a pointing device (e.g., a mouse, a touchpad, and the like), a display adapter 1819 and display 1820, main memory 1806, network adapter 1822, and a storage device 1808 including hard disk drive 1810 and removable storage device 1812.
- Storage device 1808 can comprise, for example, RAM, ROM, flash memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- System 1800 can also be provided with additional input/output devices, such as a printer (not shown).
- system 1800 communicates through a system bus 1804 or similar architecture.
- system 1800 can include an operating system (OS) that resides in memory 1806 during operation.
- system 1800 can include multiple processors.
- system 1800 can include multiple copies of the same processor.
- system 1800 can include a heterogeneous mix of various types of processors.
- system 1800 can use one processor as a primary processor and other processors as co-processors.
- system 1800 can include one or more multi-core processors and one or more single core processors.
- system 1800 can include any number of execution cores across a set of processors.
- other components and peripherals can be included in system 1800.
- Main memory 1806 serves as a primary storage area of system 1800 and holds data that is actively used by applications running on processor 1802.
- applications are software programs that each contains a set of computer instructions for instructing system 1800 to perform a set of specific tasks during runtime, and that the term "applications" can be used interchangeably with application software, application programs, device drivers, and/or programs in accordance with embodiments of the present teachings.
- Memory 1806 can be implemented as a random access memory or other forms of memory as described below, which are well known to those skilled in the art.
- OS is an integrated collection of routines and instructions that are responsible for the direct control and management of hardware in system 1800 and system operations. Additionally, OS provides a foundation upon which to run application software and device drivers. For example, OS can perform services, such as resource allocation, scheduling, input/output control, and memory management. OS can be predominantly software, but can also contain partial or complete hardware implementations and firmware.
- Examples of the OS include WINDOWS (e.g., WINDOWS CE, WINDOWS NT, WINDOWS 2000, WINDOWS XP, WINDOWS VISTA, WINDOWS 7, and WINDOWS 8), MAC OS, LINUX distributions (e.g., Arch Linux, Chromium OS, Debian including Ubuntu, Fedora including Red Hat Enterprise Linux, Gentoo, openSUSE, and Slackware), and UNIX variants (e.g., ORACLE SOLARIS, OPEN VMS, and IBM AIX).
- Variations in the elements of the 3D-2D registration method of FIG. 16 are also contemplated.
- the system could operate using any number of variations on: the forward projector that computes the DRR, where variations can include the "Siddon algorithm" or other forms of forward projector for computing a DRR; the similarity metric, including GI, NGI, GC, or variations (mean-square error, mutual information, normalized cross correlation, etc.); and the optimization method (including CMA-ES (shown here) or other methods, such as simplex, gradient descent, etc.).
- processor 1802 can be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
- a general-purpose processor can be a microprocessor, but, in the alternative, the processor can be any conventional processor, controller, microcontroller, or state machine.
- a processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- the functions described can be implemented in hardware, software, firmware, or any combination thereof.
- the techniques described herein can be implemented with modules (e.g., procedures, functions, subprograms, programs, routines, subroutines, modules, software packages, classes, and so on) that perform the functions described herein.
- a module can be coupled to another module or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
- Information, arguments, parameters, data, or the like can be passed, forwarded, or transmitted using any suitable means including memory sharing, message passing, token passing, network transmission, and the like.
- the software codes can be stored in memory units and executed by processors.
- the memory unit can be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
- Computer-readable media includes both tangible, non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
- a storage media can be any available tangible, non-transitory media that can be accessed by a computer.
- tangible, non- transitory computer-readable media can comprise RAM, ROM, flash memory, EEPROM, CD- ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- Disk and disc, as used herein, include CD, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
- any connection is properly termed a computer-readable medium.
- if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Combinations of the above should also be included within the scope of computer-readable media.
- a "quality metric" can be generated and displayed that evaluates the quality of registration.
- a similarity metric (e.g., GI) can serve as the basis for the quality metric.
- an ordinal scale that in effect conveys a "green," "yellow," or "red" light to the surgeon, meaning, respectively: (a) green (high GI value, or other metric) - this registration was easy, and there is high confidence in the result; (b) yellow (mid-value GI or other) - this registration was somewhat difficult, and the result may be worth double-checking; (c) red (low-value GI or other) - this registration was difficult, there was little consistency, and the result may be erroneous; the user should double-check, reinitialize, run again, and/or check by other means.
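The ordinal green/yellow/red mapping can be sketched as a simple thresholding of the similarity value. The threshold values below are placeholders and an assumption; in practice they would be calibrated on retrospective registration cases.

```python
def registration_confidence(gi_value, green_thresh=0.8, yellow_thresh=0.5):
    # Map a similarity value (e.g., GI) to an ordinal confidence label.
    # The thresholds are illustrative placeholders, not calibrated values.
    if gi_value >= green_thresh:
        return "green"   # easy registration, high confidence
    if gi_value >= yellow_thresh:
        return "yellow"  # somewhat difficult, worth double-checking
    return "red"         # possibly erroneous; reinitialize or verify
```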
- the numerical values as stated for the parameter can take on negative values.
- the example value of range stated as "less than 10" can assume negative values, e.g., -1, -2, -3, -10, -20, -30, etc.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Radiology & Medical Imaging (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- High Energy & Nuclear Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Optics & Photonics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Neurology (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
The invention concerns a method for automatic registration of medical images, the method possibly comprising: obtaining one or more three-dimensional magnetic resonance (MR) images of a target area; obtaining a two-dimensional (2D) image of at least part of the target area; computing an MR-to-computed-tomography (CT) synthesis from the obtained 3D MR image; computing a 3D-2D registration based on the MR-CT synthesis and the obtained 2D image; and outputting a result of the 3D-2D registration.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201461996655P | 2014-05-14 | 2014-05-14 | |
| US61/996,655 | 2014-05-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015175848A1 true WO2015175848A1 (fr) | 2015-11-19 |
Family
ID=54480720
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2015/030913 Ceased WO2015175848A1 (fr) | 2014-05-14 | 2015-05-14 | Système et procédé de localisation automatique de structures dans des images de projection |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2015175848A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018021652A1 (fr) * | 2016-07-28 | 2018-02-01 | 가톨릭대학교 산학협력단 | Procédé d'estimation de dose de rayonnement d'une image composite basée sur une irm à l'aide d'une table de consultation |
| CN111528889A (zh) * | 2020-04-30 | 2020-08-14 | 赤峰学院附属医院 | 颅颌面状态的分析方法及装置、电子设备 |
| WO2022120018A1 (fr) * | 2020-12-02 | 2022-06-09 | Acrew Imaging, Inc. | Procédé et appareil de fusion d'images multimodales à des images fluoroscopiques |
| CN116596812A (zh) * | 2023-03-06 | 2023-08-15 | 重庆理工大学 | 一种低剂量ct图像与x射线热声图像配准融合的方法 |
| TWI856893B (zh) * | 2023-12-01 | 2024-09-21 | 財團法人金屬工業研究發展中心 | 手術影像的對位方法與其系統 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050065424A1 (en) * | 2003-06-06 | 2005-03-24 | Ge Medical Systems Information Technologies, Inc. | Method and system for volumemetric navigation supporting radiological reading in medical imaging systems |
| KR100529119B1 (ko) * | 2004-12-08 | 2005-11-15 | 주식회사 사이버메드 | 3차원 의료용 이미지의 합성을 위한 이미지 정합 방법 |
| US20110251454A1 (en) * | 2008-11-21 | 2011-10-13 | Mayo Foundation For Medical Education And Research | Colonoscopy Tracking and Evaluation System |
| US20130331697A1 (en) * | 2012-06-11 | 2013-12-12 | Samsung Medison Co., Ltd. | Method and apparatus for displaying three-dimensional ultrasonic image and two-dimensional ultrasonic image |
| US20140037165A1 (en) * | 2011-02-10 | 2014-02-06 | Timothy King | Multi-Source Medical Display |
-
2015
- 2015-05-14 WO PCT/US2015/030913 patent/WO2015175848A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050065424A1 (en) * | 2003-06-06 | 2005-03-24 | Ge Medical Systems Information Technologies, Inc. | Method and system for volumemetric navigation supporting radiological reading in medical imaging systems |
| KR100529119B1 (ko) * | 2004-12-08 | 2005-11-15 | 주식회사 사이버메드 | 3차원 의료용 이미지의 합성을 위한 이미지 정합 방법 |
| US20110251454A1 (en) * | 2008-11-21 | 2011-10-13 | Mayo Foundation For Medical Education And Research | Colonoscopy Tracking and Evaluation System |
| US20140037165A1 (en) * | 2011-02-10 | 2014-02-06 | Timothy King | Multi-Source Medical Display |
| US20130331697A1 (en) * | 2012-06-11 | 2013-12-12 | Samsung Medison Co., Ltd. | Method and apparatus for displaying three-dimensional ultrasonic image and two-dimensional ultrasonic image |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018021652A1 (fr) * | 2016-07-28 | 2018-02-01 | 가톨릭대학교 산학협력단 | Procédé d'estimation de dose de rayonnement d'une image composite basée sur une irm à l'aide d'une table de consultation |
| US10602956B2 (en) | 2016-07-28 | 2020-03-31 | Catholic University Industry-Academic Cooperation Foundation | Radiation dose estimating method of MRI based composite image using lookup table |
| CN111528889A (zh) * | 2020-04-30 | 2020-08-14 | 赤峰学院附属医院 | 颅颌面状态的分析方法及装置、电子设备 |
| CN111528889B (zh) * | 2020-04-30 | 2021-05-18 | 赤峰学院附属医院 | 颅颌面状态的分析方法及装置、电子设备 |
| WO2022120018A1 (fr) * | 2020-12-02 | 2022-06-09 | Acrew Imaging, Inc. | Procédé et appareil de fusion d'images multimodales à des images fluoroscopiques |
| CN116596812A (zh) * | 2023-03-06 | 2023-08-15 | 重庆理工大学 | 一种低剂量ct图像与x射线热声图像配准融合的方法 |
| TWI856893B (zh) * | 2023-12-01 | 2024-09-21 | 財團法人金屬工業研究發展中心 | 手術影像的對位方法與其系統 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Brock et al. | Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132 | |
| US10149987B2 (en) | Method and system for generating synthetic electron density information for dose calculations based on MRI | |
| US7817836B2 (en) | Methods for volumetric contouring with expert guidance | |
| JP5759446B2 (ja) | 解剖学的特徴を輪郭抽出するシステム、作動方法及びコンピュータ可読媒体 | |
| EP2252204B1 (fr) | Substitut de tomodensitométrie (ct) par auto-segmentation d'images par résonnance magnétique | |
| EP2807635B1 (fr) | Détection automatique d'un implant à partir d'artéfacts d'images | |
| US8942455B2 (en) | 2D/3D image registration method | |
| EP3268931B1 (fr) | Procédé et appareil d'évaluation d'enregistrement d'images | |
| US11284846B2 (en) | Method for localization and identification of structures in projection images | |
| Tomazevic et al. | 3-D/2-D registration by integrating 2-D information in 3-D | |
| De Silva et al. | Registration of MRI to intraoperative radiographs for target localization in spinal interventions | |
| CN114830181B (zh) | 用于人工智能的患者图像的解剖加密 | |
| EP3011358B1 (fr) | Segmentation de corticale à partir de données de dixon rm | |
| WO2015175848A1 (fr) | Système et procédé de localisation automatique de structures dans des images de projection | |
| US20230169676A1 (en) | System and Method for Identifying Feature in an Image of a Subject | |
| CN108430376B (zh) | 提供投影数据集 | |
| US20210391061A1 (en) | Compartmentalized dynamic atlas | |
| CN117115221A (zh) | 一种肺肿瘤位置和形态实时估计方法、系统和存储介质 | |
| Dowling et al. | Image synthesis for MRI-only radiotherapy treatment planning | |
| Kall | Image reconstruction and fusion | |
| Huang et al. | Multi-modality registration of preoperative MR and intraoperative long-length tomosynthesis using GAN synthesis and 3D-2D registration | |
| Vijayan | ADVANCED INTRAOPERATIVE IMAGE REGISTRATION FOR PLANNING AND GUIDANCE OF ROBOT-ASSISTED SURGERY | |
| de Oliveira et al. | Image Registration Methods for Patient-Specific Virtual Physiological Human Models. | |
| Li et al. | Volumetric Image Registration of Multi-modality Images of CT, MRI and PET | |
| Imaging | 24 Image Reconstruction and Fusion |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15792813 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 15792813 Country of ref document: EP Kind code of ref document: A1 |