
WO2016033458A1 - Restoration of positron emission tomography (PET) image quality at a reduced radiotracer dose using combined PET and magnetic resonance (MR) imaging - Google Patents

Info

Publication number
WO2016033458A1
WO2016033458A1 (PCT application PCT/US2015/047425)
Authority
WO
WIPO (PCT)
Prior art keywords
image, dose, PET, low-dose PET
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/047425
Other languages
English (en)
Inventor
Weili Lin
Dinggang Shen
Yaozong GAO
David Lalush
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of North Carolina at Chapel Hill
North Carolina State University
Original Assignee
University of North Carolina at Chapel Hill
North Carolina State University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of North Carolina at Chapel Hill and North Carolina State University
Publication of WO2016033458A1
Anticipated expiration: legal status Critical
Current legal status: Ceased

Classifications

    • G - PHYSICS
      • G06 - COMPUTING OR CALCULATING; COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 - Image enhancement or restoration
            • G06T 5/50 - using two or more images, e.g. averaging or subtraction
          • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 - Image acquisition modality
              • G06T 2207/10072 - Tomographic images
                • G06T 2207/10088 - Magnetic resonance imaging [MRI]
                • G06T 2207/10104 - Positron emission tomography [PET]
            • G06T 2207/30 - Subject of image; Context of image processing
              • G06T 2207/30004 - Biomedical image processing
                • G06T 2207/30016 - Brain
      • G01 - MEASURING; TESTING
        • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
          • G01R 33/00 - Arrangements or instruments for measuring magnetic variables
            • G01R 33/20 - involving magnetic resonance
              • G01R 33/44 - using nuclear magnetic resonance [NMR]
                • G01R 33/48 - NMR imaging systems
                  • G01R 33/4808 - Multimodal MR, e.g. MR combined with positron emission tomography [PET], MR combined with ultrasound or MR combined with computed tomography [CT]
                    • G01R 33/481 - MR combined with positron emission tomography [PET] or single photon emission computed tomography [SPECT]
    • A - HUMAN NECESSITIES
      • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
            • A61B 5/0033 - Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
              • A61B 5/0035 - adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
              • A61B 5/004 - adapted for image acquisition of a particular organ or body part
                • A61B 5/0042 - for the brain
          • A61B 6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
            • A61B 6/02 - Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
              • A61B 6/03 - Computed tomography [CT]
                • A61B 6/037 - Emission tomography
            • A61B 6/44 - Constructional features of apparatus for radiation diagnosis
              • A61B 6/4417 - related to combined acquisition of different diagnostic modalities
            • A61B 6/50 - specially adapted for specific body parts; specially adapted for specific clinical applications
              • A61B 6/501 - for diagnosis of the head, e.g. neuroimaging or craniography
            • A61B 6/52 - Devices using data or image processing specially adapted for radiation diagnosis
              • A61B 6/5205 - involving processing of raw data to produce diagnostic data
              • A61B 6/5211 - involving processing of medical diagnostic data
                • A61B 6/5229 - combining image data of a patient, e.g. combining a functional image with an anatomical image
              • A61B 6/5258 - involving detection or reduction of artifacts or noise
            • A61B 6/54 - Control of apparatus or devices for radiation diagnosis
              • A61B 6/542 - involving control of exposure

Definitions

  • the subject matter herein generally relates to positron emission tomography (PET) images, and more particularly, to methods, systems, and computer readable media for predicting, estimating, and/or generating high (diagnostic) quality PET images using PET images acquired with a dose substantially lower than the widely used clinical dose (low-dose PET) and magnetic resonance imaging (MRI) acquired from the same subject.
  • Abbreviations used herein: PET, positron emission tomography; MRI, magnetic resonance imaging; PSNR, peak signal-to-noise ratio.
  • The image quality of PET is largely determined by two factors: the dosage of radionuclide (tracer) injected into a patient and the image acquisition time. Although the latter could easily be increased, a long acquisition time can lead to more motion-related artifacts and is not applicable to radiotracers with a short half-life. The former is easily understood: a higher dose generates more detected events and thus yields images with higher PSNR. However, because of concerns about internal radiation exposure in patients, efforts to reduce the currently used clinical dose while preserving PET image quality, without compromising the ability to make an accurate diagnosis, have been actively pursued.
  • In Figure 1, the image (A) on the left is an example of a low-dose PET image, and a corresponding standard clinical dose PET image (B) is on the right.
  • Both the quality and the PSNR of a low-dose PET image are inferior to those of a standard clinical dose PET image; the difference is visibly noticeable, for example, in the contrast between image (A) and image (B).
  • The quality of the low-dose PET image is further decreased by various factors during acquisition and transmission. Consequently, the tracer dosage and process variability affect the accurate diagnosis of diseases/disorders.
  • To obtain a PET image with higher PSNR, a higher dosage of radionuclide (tracer) needs to be injected into the patient's body.
  • A combined PET/magnetic resonance imaging (MRI) system provides the benefit of scanning low-dose PET and MRI images simultaneously, so that standard clinical dose PET images can be generated using a combination of low-dose PET and MRI images to predict clinical dose PET values.
  • An exemplary method for predicting and/or generating an estimated high-dose PET image, without injecting a high-dose radiotracer into the patient, includes extracting appearance features from at least one magnetic resonance (MR) image, extracting appearance features from at least one low-dose PET image, and generating a predicted (estimated) high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
  • An exemplary system includes a hardware computing processor and a high-dose PET Prediction Module (HDPPM) implemented using the processor.
  • The HDPPM is configured to extract appearance features from at least one MR image and at least one corresponding low-dose PET image, and to generate a high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
  • a non-transitory computer readable medium has stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps.
  • the steps include extracting appearance features from at least one MR image, extracting appearance features from at least one low-dose PET image, and generating an estimated high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
  • the subject matter described herein can be implemented in software in combination with hardware and/or firmware.
  • the subject matter described herein can be implemented in software executed by one or more processors.
  • the subject matter described herein may be implemented using a non-transitory computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps.
  • Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory devices, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits.
  • a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
  • module refers to hardware, firmware, or software in combination with hardware and/or firmware for implementing features described herein.
  • The terms "low-dose" and "high-dose" refer to the dosage (e.g., quantity, amount, or measurement) of a radionuclide or radioactive tracer injected into a patient prior to PET imaging.
  • For brain 18F-FDG imaging, the standard, high-dose is approximately 5 millicuries (mCi).
  • For body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi.
  • For brain imaging, a target or low-dose is any amount less than approximately 2.5 mCi, i.e., less than approximately one half of the standard, high-dose.
  • The terms "standard-dose", "clinical dose", and "high-dose" as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate more detected events and yield PET images having a higher PSNR.
  • Figure 1 illustrates exemplary low-dose and high-dose positron emission tomography (PET) images;
  • Figure 2 is a schematic diagram illustrating methods, systems, and computer readable media for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
  • Figure 3 is a schematic diagram illustrating an exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
  • Figure 4 is another exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
  • Figures 5 to 9 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
  • Figure 10 is an illustration of prediction results regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
  • Figures 11 and 12 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
  • Figure 13 is a block diagram of an exemplary system for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
  • Figure 14 is an example of dynamic image acquisition via methods, systems, and computer readable media described herein;
  • Figure 15 is a block diagram illustrating an exemplary method for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
  • Figure 16 is a schematic block diagram illustrating an overview of a model learning methodology;
  • Figure 17 is an overview of a constructed regression forest (RF); and Figure 18 is a schematic block diagram and a specific example of a decision tree for the regression model used in predicting and generating estimated high-dose PET images.
  • a regression forest (RF) based framework is used in predicting and generating an estimate of a standard, high-dose PET image by using both low-dose PET and MRI images.
  • the prediction method includes two approaches. One approach includes prediction of a standard, high-dose PET image by tissue-specific regression forest (RF) models with the image appearance features extracted from both low-dose PET and MRI images. Another approach includes incremental refinement of a predicted standard-dose PET image by iteratively estimating the image difference between the current prediction and the target standard-dose PET. By incrementally adding the estimated image difference towards the target standard-dose PET, methods and systems described herein are able to gradually improve the quality of predicted standard, high-dose PET.
  • low-dose and high-dose refer to the dosage (e.g., quantity, amount, or measurement) of radionuclide or radioactive tracer, injected into a patient prior to and/or during PET imaging.
  • Methods, systems, and computer readable media herein can minimize radiation exposure in patients, by predicting higher quality high-dose PET image values and, thereby, generating an estimated high-dose PET image from a combination of low-dose PET and MR images.
  • Standard-dose and “high-dose” as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate sufficient detected events and obtain clinically acceptable PET images having a sufficiently high PSNR.
  • Standard, high-dose images are obtained as a result of injecting a patient with the standard, high-dose quantity or medically accepted amount of tracers.
  • For brain 18F-FDG imaging, the standard, high-dose is approximately 5 millicuries (mCi).
  • For body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi.
  • Such amounts may also refer to any other clinically accepted dose calculated in view of a patient's body weight and/or a body mass index (BMI).
  • The terms "target-dose" and "low-dose" are synonymous, and refer to a minimized dose of tracer injected into a patient for PET imaging.
  • The low-dose image is then used, in part, to predict a high-dose PET image.
  • the target, low-dose amount of tracer injected into a patient is advantageous in minimizing radiation exposure.
  • In some aspects, the target, low-dose or dosage amount of tracer injected into a patient is anywhere from approximately 1/2 to 1/10, or less, of the standard or high-dose (i.e., at least less than 50% of the standard, high-dose). That is, for brain imaging, the target, low-dose is approximately 2.5 mCi or less, approximately 1.25 mCi or less, or approximately 0.5 mCi or less.
  • For body imaging, the target, low-dose or dosage amount of tracer injected into a patient is also at least less than 50% of the standard, high-dose.
  • That is, for body imaging, the target, low-dose is approximately 5 mCi or less, approximately 2.5 mCi or less, or approximately 1 mCi or less. While the above examples focus on the use of FDG, the approach may be generalized to any other radiotracer.
  • A voxel is defined as a value or position on a regular grid in three-dimensional (3D) space. "Voxel" is a combination of the terms "volume" and "pixel", where "pixel" is a combination of "picture" and "element".
  • A voxel is analogous to a texel, which represents 2D image data in a bitmap (sometimes referred to as a pixmap). The position of a voxel is inferred from its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image).
  • Initially, a model, generally designated "Model 1", is generated.
  • Model 1 may be determined or generated from data obtained from a plurality of MRI, low-dose PET images, and high, standard-dose PET images.
  • a prediction model can be calculated such that the high, standard-dose PET images can be predicted using the model built or trained from the data set.
  • tissue-specific models can be built using low-dose PET and MRI images.
  • The first model or "Model 1" can be refined, and iteratively updated into a refined model designated "Model 2" to "Model N" (e.g., where N is a whole number (integer) > 2), via estimating the image difference between the predicted and actual high-dose PET images.
  • PET is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in a human body.
  • PET has been widely used in various clinical applications, such as diagnosis of tumors, diseases, and diffuse brain disorders.
  • High quality PET images play an essential role in diagnosing diseases/disorders and assessing the response to therapy.
  • To generate a high quality PET image, a standard or high-dose radionuclide (tracer) needs to be injected into the patient's living body, in which case the risk of radiation exposure increases.
  • Accordingly, researchers have attempted to acquire low-dose PET images, as opposed to high-dose images, to minimize the radiation risk, at the cost of reduced image quality or a lengthened imaging acquisition time.
  • a regression forest (RF) based framework is used for generating an estimated standard or high-dose PET image by using values predicted from a low-dose PET image and its corresponding magnetic resonance imaging (MRI) image.
  • MRI magnetic resonance imaging
  • Exemplary embodiments herein include prediction of standard-dose PET images of brain tissue using simultaneously acquired low-dose PET/MR images. Prediction of standard-dose PET images for any non-brain tissue (e.g., body tissue) can also be provided.
  • Systems and methods herein are not limited to predicting standard-dose PET images of the brain; rather, they can be used to predict standard-dose PET images of any anatomical member of a patient's body, or tissue thereof, such as the foot, knee, back, shoulder, stomach, lung, neck, etc.
  • any standard-dose PET scan (even whole body scans) can be predicted using systems and methods described herein.
  • prediction methods, systems, and computer readable media herein are used to transform MR and low-dose PET images, or data obtained therefrom, into a high-dose PET image.
  • The proposed method includes two steps. First, based on the tissues segmented from the MRI image (i.e., cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) in this example), the appearance features for each patch in the brain image are extracted from both the low-dose PET and MRI images to build tissue-specific models that can be used to predict standard, high-dose PET images. Second, a refinement strategy that estimates the predicted image difference is used to further improve the prediction accuracy.
  • the proposed approach has been evaluated on a dataset consisting of eight (8) subjects with MRI, low-dose PET and high-dose PET images, using leave-one-out cross- validation. The proposed method is also compared with the sparse representation (SR) based method. Both qualitative and quantitative results indicate better performance using methods, systems, and computer readable media described herein.
  • Random forest, often called a regression forest (RF) when applied to non-linear regression tasks, was originally proposed by Breiman [4]. It consists of multiple binary decision trees, with each tree trained independently with random features and thresholds. The final prediction of a random forest is the average over the predictions of all its individual trees. As an ensemble method, it has proved to be a powerful tool (e.g., a training tool) in the machine learning field, and has recently gained much popularity on both classification and regression problems, such as remote sensing image classification [29, 15], medical image segmentation [22, 27, 46], diagnosis of human diseases/disorders [2, 13, 39], facial analysis [6], and so on. Similar to other supervised models, the use of a regression forest involves both a training stage and a testing stage.
  • regression forest aims to learn a non-linear model for predicting the target t based on the input features f.
  • each binary decision tree is trained independently.
  • a binary decision tree consists of two types of nodes, namely split nodes (non-leaf nodes) and leaf nodes.
  • The optimal combination of the feature index j and threshold θ is learned by maximizing the average variance decrease in each dimension of the regression target after splitting.
  • the leaf node stores the average regression target of training samples falling into this node.
  • the training of binary decision tree starts with finding the optimal split at the root node, and recursively proceeds on child nodes until either the maximum tree depth is reached or the number of training samples is too small to split.
  • a new testing sample is pushed through each learned decision tree, starting at the root node.
  • At each split node, the associated decision stump function g(f; j, θ) is applied to the testing sample. If the result is false, the testing sample is sent to the left child; otherwise, it is sent to the right child.
  • the average regression target stored in that leaf node will be taken as the output of this binary decision tree.
  • the final prediction value of the entire forest is the average of outputs from all binary decision trees.
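  • As an illustration (not the patent's code), the variance-decrease split search described above can be sketched as follows; the function name best_split, the candidate threshold choice, and the data layout are assumptions made for the example.

```python
# Illustrative search for the optimal split at one node: choose the feature
# index j and threshold theta that maximize the decrease in variance of the
# regression target after splitting, as described above.
import numpy as np

def best_split(F, t, candidate_features, n_thresholds=10):
    """F: (n_samples, n_features) features; t: (n_samples,) regression targets."""
    best = (None, None, -np.inf)
    parent_var = t.var() * len(t)                      # total variance before split
    for j in candidate_features:                       # randomly selected feature dims
        for theta in np.quantile(F[:, j], np.linspace(0.1, 0.9, n_thresholds)):
            left = t[F[:, j] < theta]
            right = t[F[:, j] >= theta]
            if len(left) == 0 or len(right) == 0:
                continue
            decrease = parent_var - (left.var() * len(left) + right.var() * len(right))
            if decrease > best[2]:
                best = (j, theta, decrease)
    return best  # (feature index j, threshold theta, variance decrease)
```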
  • The goal of the RF-based approach is to predict the intensity of each voxel in a new subject.
  • The approach consists of two major steps, the initial standard-dose PET prediction step and the incremental refinement step via estimating image differences. Both steps adopt a regression forest as the non-linear prediction model. Each is discussed in detail below.
  • a human brain Due to the large volume of a human brain (e.g., usually with millions of image voxels), it is intractable to learn a global regression model for predicting the high-dose PET image over the entire brain. Many studies [9, 36] have shown that learning multiple local models would improve the prediction performance, compared with a single global model. Thus, one RF can be learned for each type of tissue.
  • brain tissue models are learned where one model corresponds to white matter (WM), one corresponds to gray matter (GM), and/or one corresponds to cerebrospinal fluid (CSF). Since the appearance variation within each brain tissue is much less than that across different brain tissues, tissue-specific regression forest models yield more accurate predictions than a global regression forest model (trained for the entire brain).
  • The proposed method consists of a training stage and a testing stage, as follows. Non-brain (e.g., body) tissue-specific models can also be learned, trained, and/or provided.
  • training data consists of MRI, low-dose PET, and standard-dose PET from different training patients (i.e., "training subjects").
  • Each training subject has one set of MRI images (for example, T1-weighted images), two sets of low-dose PET images (scanned separately, one after the other, with details explained in Section 4.1 below, Datasets and preprocessing), and one set of corresponding standard-dose PET images.
  • The four images of each training subject (i.e., one MR image, two low-dose PET images, and one high-dose PET image) are linearly aligned onto a common space using FLIRT [8].
  • FLIRT: FMRIB's Linear Image Registration Tool.
  • a brain segmentation method [47] is adopted to segment the entire brain region into WM, GM and CSF for each training subject, based on the respective MRI image.
  • Figure 3 illustrates extracting training data (features and response) to train the tissue-specific regression forests for predicting the initial standard-dose PET image.
  • These steps may be performed by a prediction node (e.g., node 100, Figure 13).
  • Prediction node(s) described herein include a computing platform having a hardware processor and memory element configured to execute steps for predicting and generating an estimated high-dose PET image without actually having to perform a high-dose PET scan.
  • In the testing stage, given a testing subject with both MRI and low-dose PET images, the MRI and low-dose PET images are first linearly aligned onto a common space (as defined in the training stage) using FLIRT [8], and the MRI image is automatically segmented into three brain tissues [47]. Then, the high-dose PET image can be predicted in a voxel-wise manner using the local image appearance information from the aligned MRI and low-dose PET images. Specifically, for each voxel in the unknown standard-dose PET image, similar to the training stage shown in Figure 3, the prediction node extracts the local intensity patches at the same location from both the MRI and low-dose PET images.
  • Depending on the tissue type at that voxel, the prediction node applies the corresponding tissue-specific regression forest to predict the standard-dose PET value for the voxel. By iterating over all image voxels, a standard-dose PET image can be predicted, as sketched below.
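  • A hedged sketch of this voxel-wise testing stage is given below. The helper names (extract_patch, predict_standard_dose), the integer tissue labels, and the forests dictionary are illustrative assumptions; any regression forest exposing a scikit-learn-style predict() would fit.

```python
# Illustrative (not the patent's code) voxel-wise prediction: extract 9x9x9
# patches from the aligned MRI and low-dose PET volumes, concatenate them into
# a feature vector, and route the vector to the forest for that voxel's tissue.
import numpy as np

def extract_patch(volume, center, size=9):
    """Extract a size^3 intensity patch centered at `center`, flattened."""
    r = size // 2
    x, y, z = center
    return volume[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()

def predict_standard_dose(mri, low_pet, tissue_map, forests, size=9):
    """Predict a standard-dose PET volume voxel by voxel.

    `forests` maps an integer tissue label (e.g., 1=CSF, 2=GM, 3=WM) to a
    trained regression forest with a .predict() method.
    """
    pred = np.zeros_like(low_pet)
    r = size // 2
    for x in range(r, mri.shape[0] - r):
        for y in range(r, mri.shape[1] - r):
            for z in range(r, mri.shape[2] - r):
                label = tissue_map[x, y, z]
                if label not in forests:          # skip background voxels
                    continue
                f = np.concatenate([extract_patch(mri, (x, y, z), size),
                                    extract_patch(low_pet, (x, y, z), size)])
                pred[x, y, z] = forests[label].predict(f[None, :])[0]
    return pred
```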
  • The initial prediction framework, with both training and testing (prediction) stages, is summarized in Table 1 as follows:
  • Sub-framework 1: Initially predicting the high-dose PET image by using tissue-specific regression forests (RF), for white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF).
  • In the refinement step, additional tissue-specific RFs are trained for gradually (i.e., incrementally or iteratively) minimizing the image difference between the predicted image and the target image, i.e., the actual standard-dose PET images obtained during the training stage.
  • Specifically, the tissue-specific RFs at iteration k aim to estimate the image difference between the standard-dose PET image predicted by the previous k iterations and the target or actual standard-dose PET image.
  • These RFs are trained in the same way as the tissue-specific RFs described in Subsection 3.1 above.
  • Figure 4 illustrates extracting training data (features and response) to train the tissue-specific RFs for predicting (estimating) the image difference.
  • a prediction node of a special purpose computing platform as described in detail below can learn three tissue-specific regression forests during the training stage, which are configured to predict (estimate) the image difference within a respective tissue region.
  • the new updated prediction may be closer to the target standard-dose PET image, thus improving the prediction accuracy.
  • the learned tissue-specific RFs can be applied sequentially to obtain a final predicted standard-dose PET image.
  • The tissue-specific regression forests in the first iteration (e.g., to obtain "Model 1", Figure 2) predict the initial standard-dose PET image.
  • The tissue-specific regression forests in the next iterations (e.g., Model 2 to Model N, Figure 2) will be used to sequentially estimate the image difference between the current prediction and the target standard-dose PET image.
  • the estimated image differences by the later regression forests will be sequentially added onto the initially predicted high-dose PET image for incremental refinement.
  • the incremental refinement framework can further boost the prediction accuracy of tissue- specific regression forests.
  • Sub-framework 2: Incremental refinement via estimating the image difference. Given: MRI, low-dose PET, and previously predicted standard-dose PET images.
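  • The refinement loop can be pictured with the following minimal sketch, assuming scikit-learn's RandomForestRegressor as a stand-in for the tissue-specific forests; for brevity the feature matrix is held fixed across iterations, whereas the framework above also feeds the previously predicted standard-dose PET back in as input.

```python
# Hedged sketch of incremental refinement: at each iteration, a forest is
# trained to regress the residual between the current prediction and the
# ground-truth standard-dose PET, and the predicted residual is added back.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def refine(features, current_pred, target, n_iters=2):
    """features: (n_voxels, n_features) MRI + low-dose PET patch features.
    current_pred, target: (n_voxels,) initial prediction and ground truth."""
    pred = current_pred.copy()
    models = []
    for k in range(n_iters):
        residual = target - pred                   # image difference to learn
        rf = RandomForestRegressor(n_estimators=10, max_depth=15)
        rf.fit(features, residual)                 # corresponds to "Model k+2"
        pred = pred + rf.predict(features)         # add the estimated difference
        models.append(rf)
    return pred, models
```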
  • Leave-one-out cross-validation which has been adopted in numerous papers [20, 37, 43], can be used to evaluate the performance of the outlined approach. Specifically, at each leave-one-out case, seven (7) subjects are selected as training images, and the remaining one is used as a testing image. This process is repeated until each image is taken as the testing image once. In both the training and testing stages, all images from each subject are linearly aligned onto a common space via FLIRT [8]. The dataset and preprocessing steps are described in detail in the following subsection.
  • The contribution of each element is also investigated, i.e., the effect of MRI in helping low-dose PET predict the standard-dose PET image, the effect of tissue-specific models, the effect of image difference estimation for incremental refinement, and the effect of combining more low-dose PETs with MRI.
  • All experiments use the following parameters: patch size: 9x9x9; number of trees in a forest: 10; number of randomly selected features: 1000; maximum tree depth: 15; minimum number of samples at each leaf: 5; number of iterations in incremental refinement: 2.
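  • For illustration only, a leave-one-out loop using these parameter values might look like the sketch below; build_features (patch extraction over aligned images) and evaluate (e.g., NMSE/PSNR) are hypothetical helpers, and scikit-learn's RandomForestRegressor stands in for the tissue-specific forests.

```python
# Illustrative leave-one-out cross-validation over the eight subjects, using
# the parameter values listed above. Assumes the feature vectors have at least
# 1000 dimensions (two 9x9x9 patches give 1458), so max_features=1000 is valid.
from sklearn.ensemble import RandomForestRegressor

PARAMS = dict(n_estimators=10, max_features=1000, max_depth=15, min_samples_leaf=5)
PATCH_SIZE = 9

def leave_one_out(subjects, build_features, evaluate):
    scores = []
    for i, test_subject in enumerate(subjects):
        train_subjects = subjects[:i] + subjects[i + 1:]      # remaining 7 subjects
        F_train, t_train = build_features(train_subjects, PATCH_SIZE)
        rf = RandomForestRegressor(**PARAMS)
        rf.fit(F_train, t_train)
        F_test, t_test = build_features([test_subject], PATCH_SIZE)
        scores.append(evaluate(rf.predict(F_test), t_test))   # e.g., NMSE / PSNR
    return scores
```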
  • The method was evaluated on a dataset consisting of eight (8) patients. Patients were chosen from a group referred for PET scans for clinical indications. In each case, the diagnosis was unknown and not used in the analysis. Patients were administered an average of 203 megabecquerels (MBq) (range: 191 MBq to 229 MBq) of an exemplary radiotracer, such as 18F-fluorodeoxyglucose (18F-FDG).
  • the first PET scan (the "standard-dose”, aka, the "high-dose” scan) was performed for a full 12 minutes within sixty minutes of injection, in accordance with standard protocols.
  • A second PET dataset was acquired in list-mode for 12 minutes, which was broken up into separate three-minute sets (the "low-dose" scans). Note that the reduced acquisition time at standard dose serves as a surrogate for the standard acquisition time at a reduced dose. In this case, the "low-dose" is approximately 25% of the standard dose.
  • In processing, four images are used for each subject: one MRI, two low-dose PETs, and one standard-dose PET. All data was acquired on a Siemens Biograph mMR (a hybrid MR-PET or PET-MR system). Of note, for all subjects, the low-dose PET image sets are completely separate acquisitions from the standard-dose PET image sets. Moreover, each of the low-dose PET images is a separate acquisition (simulating image acquisition at different time points). Meanwhile, a T1-weighted MR image was also scanned; the T1-weighted MR image was affine-aligned to the PET image space.
  • Prediction quality is evaluated using the normalized mean squared error (NMSE) and the peak signal-to-noise ratio (PSNR), defined in the standard way consistent with the quantities below:
  • NMSE = ||H - Ĥ||² / ||H||², and PSNR = 10·log10(L²·M / ||H - Ĥ||²),
  • where H is the ground-truth standard-dose PET image, Ĥ is the predicted high-dose PET image, L is the maximal intensity range of images H and Ĥ, and M is the total number of voxels in the image.
  • A good algorithm provides lower NMSE and higher PSNR.
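  • The two metrics can be computed directly from the definitions above, for example with NumPy (the intensity range L is taken here as max minus min, an assumption consistent with the description):

```python
# NMSE and PSNR for a predicted volume H_hat against the ground truth H.
import numpy as np

def nmse(H, H_hat):
    return np.sum((H - H_hat) ** 2) / np.sum(H ** 2)

def psnr(H, H_hat):
    M = H.size                           # total number of voxels
    L = H.max() - H.min()                # maximal intensity range (assumed max - min)
    mse = np.sum((H - H_hat) ** 2) / M
    return 10.0 * np.log10(L ** 2 / mse)
```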
  • anatomical information provided by MRI image can compensate for the molecular information of PET in the PET/MRI imaging system [41].
  • the effect of combining MRI with low-dose PET image for predicting standard-dose PET images is investigated.
  • Specifically, three global models are built, using respectively 1) the MRI, 2) one of the low-dose PETs, and 3) the combination of MRI and low-dose PET, to predict the standard-dose PET image of the entire brain without separation of the model into tissue-specific components.
  • Table 3 lists the prediction performances, in terms of NMSE and PSNR.
  • Figure 5 illustrates a further comparison between the model built using low-dose PET 1 and the model built using the combination of MRI and low-dose PET 1.
  • Figure 5 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by two global models built by 1) low-dose PET 1 and 2) MRI + low-dose PET 1.
  • tissue-specific models are built for each type of brain tissue (WM, GM, and CSF) and used for predicting standard-dose PET of the respective brain tissue, while the global model is built for the entire brain and used for predicting whole-brain standard-dose PET.
  • the tissue-specific model and the global model are built using the same MRI plus low-dose PET 1 .
  • Table 4 lists prediction performances, in terms of NMSE and PSNR.
  • Figure 6 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by a global model and tissue-specific models, respectively.
  • As Table 4 and Figure 6 collectively illustrate, it is apparent that the tissue-specific models yield better overall performance compared with the global model, i.e., with lower NMSE and higher PSNR.
  • Figure 6 illustrates a comparison for the prediction performances by using tissue-specific models and a global model.
  • Table 4 compares the prediction performances yielded by the global model and the tissue-specific models, respectively. Both illustrate that the tissue-specific models yield better overall performance than the global model, i.e., a model having lower NMSE and higher PSNR.
  • the prediction performance can be further improved by auto-context models [9, 35, 40].
  • The performance improvement from estimating image differences between the previously predicted standard-dose PET and the original standard-dose PET (ground truth) is examined.
  • the term "one-layer model” is the above model that directly estimates high-dose PET as the one-layer model
  • the term "two-layer model” is the above model + image difference estimation as the two-layer model. Note that all these methods use tissue-specific models, built using the MRI plus low-dose PET 1.
  • Table 5 lists the prediction performances, in terms of NMSE and PSNR for multiple subjects. Table 5 compares prediction performances yielded by one-layer model and two-layer model, respectively.
  • Figure 7 is a graphical illustration of the performance comparison between the one-layer model and the two-layer model. From both Table 5 and Figure 7, it can be seen that, compared with the one-layer model, the two-layer model (ensemble model) achieves better prediction performance, indicated by lower NMSE and higher PSNR values.
  • Figure 7 is the comparison, in terms of NMSE and PSNR, yielded by the one-layer tissue-specific model and the two-layer tissue-specific model, respectively.
  • Both the one-layer and two-layer models are built by using the combination of MRI and low-dose PET 1.
  • Figure 8 is a graphical comparison between the one-layer model (first layer model, "Model 1") and the two-layer model ("Model 1 + 2") on a sequence of voxels with maximal prediction errors using Model 1. From Table 5 and Figure 7, it is apparent that the overall performance for the entire brain is improved slightly by additionally using "Model 2". However, as shown in Figure 8, for the voxels with maximal prediction errors by Model 1, the performance improvement from further using Model 2 (e.g., Model 1 + 2) is visibly apparent, especially for some subjects, as shown by the "one subject" lines in Figure 8.
  • Model 1 already achieves very good performance for most voxels, which limits the apparent overall improvement contributed by Model 2.
  • Figure 8 is the performance comparison between the proposed Model 1 and the Model 1 + 2, in terms of NMSE and PSNR.
  • the "OVERALL" lines in Figure 8 denote the results from all subjects, while "ONE SUBJECT” lines denote for the results from a selected subject.
  • Both Model 1 and Model 1 + 2 use the tissue-specific models built with the combination of MRI and low-dose PET 1.
  • 4.6. Effect of combining more low-dose PETs with MRI in predicting standard-dose PET
  • Figure 9 shows the comparison of prediction performances yielded by using the models constructed with different combinations of modalities as described above.
  • Table 6 compares the prediction performances yielded by the models using different combinations of modalities.
  • Model 1 refers to a one layer model (the first layer model)
  • Model 1 + 2 refers to a two layer model in which the first layer model is used to predict the initial high-dose PET image, and the second layer model is used to estimate the image difference.
  • Both Table 6 and Figure 9 demonstrate that the best performance is yielded by the model built using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
  • Figure 9 is a comparison of prediction performances, in terms of NMSE and PSNR, yielded by the models built with different combinations of modalities.
  • all models (the first layer models (Model 1 ) and the second layer models (Model 2) for three kinds of combinations) use tissue specific models.
  • In the SR-based method, the sparse coefficients are estimated as in Equation (3):
  • Equation (3): â_v = argmin over a_v of ||D_v a_v - f(v)||² + λ1·||a_v||₁ + λ2·||a_v||²,
  • where f(v) is the feature vector of voxel v, defined as the vector of concatenated intensities of local patches from both MRI and low-dose PET; a_v is the sparse coefficient of voxel v to be estimated; D_v is the dictionary of voxel v, consisting of feature vectors of voxels within a small neighborhood of voxel v from all training subjects; and λ1 and λ2 control the sparsity and smoothness of the estimated sparse coefficient a_v.
  • P_v is the dictionary that contains intensity patches from the high-dose PET images corresponding to the elements in the overall dictionary D_v. Then, by taking the center value from P_v â_v, the intensity of voxel v in the new predicted standard-dose PET is obtained as in Equation (5) below:
  • Equation (5): Ĥ(v) = C(P_v â_v),
  • where C(·) is the operation of taking the center value from a column vector.
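  • A hedged sketch of the SR prediction for a single voxel follows; scikit-learn's Lasso is used as a stand-in solver for Equation (3) (only the l1 sparsity term is kept, and its objective is scaled slightly differently), and the dictionary shapes are assumptions for the example.

```python
# Illustrative SR-based prediction for one voxel: code the feature vector f(v)
# against the local dictionary D_v with an l1 penalty, then reconstruct the
# standard-dose intensity from the matching high-dose patch dictionary P_v.
import numpy as np
from sklearn.linear_model import Lasso

def sr_predict_voxel(f_v, D_v, P_v, lam=0.1):
    """f_v: (d,) feature vector; D_v: (d, n_atoms); P_v: (p, n_atoms)."""
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(D_v, f_v)                  # approx. min ||D_v a - f(v)||^2 + lam*||a||_1
    a_v = lasso.coef_                    # sparse coefficients
    patch = P_v @ a_v                    # reconstructed high-dose patch
    return patch[patch.size // 2]        # C(.): take the center value
```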
  • Figure 10 is an illustration of prediction results for SR and the method above (i.e., RF, Model 1 + 2) on two different subjects, shown in the first and second rows, across columns (A) to (H) respectively.
  • the lower contrast "difference" maps (shown in column (E) and column (G)) are computed between the predicted high-dose PET and the original high-dose PET (ground truth).
  • Figure 10 illustrates the qualitative results of predicted standard-dose PET using, respectively, 1 ) the proposed method (regression forest (RF) based method (two-layer model (Model 1 + 2))), and 2), SR based method, on the two randomly selected subjects.
  • Table 7 below and Figure 11 show the quantitative comparison between the proposed RF method (including one-layer model (Model 1) and two-layer model (Model 1 + 2)) and an SR-based method, in terms of NMSE and PSNR.
  • Figure 12 shows the quantitative comparison between the proposed RF method (including one-layer model (Model 1) and two-layer model (Model 1 + 2)) and an SR-based method, in terms of NMSE and PSNR.
  • In Figure 12, in order to demonstrate the quality improvement of the RF-predicted standard-dose PET image over the original low-dose PET image, both the NMSE and PSNR of the low-dose PET with respect to the ground truth (original standard-dose PET) are also calculated.
  • The parameter settings for the instant and improved RF method (and RF-based models) are the same as the settings described in Subsection 4.6 above.
  • Table 7 compares prediction performances, in terms of NMSE and PSNR.
  • the term "Low-dose PET 1 and 2 (Mean)" is indicative of the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth.
  • the RF-based models i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models. All methods, including SR, are built by using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
  • The RF models (e.g., RF (Model 1 + 2)) are improved over SR techniques.
  • The RF models achieve more desirable predictions than the SR technique, with much smaller difference magnitudes (see, e.g., column (G) in Figure 10), and are more similar in image appearance to the ground truth.
  • Figure 11 is a graphical plot comparing prediction performances, in terms of NMSE and PSNR, with respect to the ground truth (original high-dose PET).
  • all models are built by using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
  • the regression forest (RF) based models i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models and not a global model.
  • Figure 12 is a comparison of image quality, in terms of NMSE and PSNR, with respect to the ground truth (i.e., the original high-dose PET).
  • Low-dose PET 1 and 2 is the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth.
  • RF(Model 1 + 2) stands for the (NMSE or PSNR) value of predicted high-dose PET image with respect to the ground truth, by using the regression forest based method, i.e., RF(Model 1 + 2).
  • RF(Model 1 + 2) also uses tissue-specific models, not a global model.
  • The limited prediction accuracy of SR may be due to two or more reasons: one reason is that both the MRI and low-dose PET modalities are treated equally in the sparse representation, and a possible second reason is that only linear prediction models are adopted, which might be insufficient to capture the complex relationship among MRI, low-dose PET, and high-dose PET.
  • The instant and improved RF-based method adopts RF to simultaneously identify informative features from MRI and low-dose PET for predicting and generating estimated standard-dose PET images, and further learns the intrinsic relationship among MRI, low-dose PET, and standard-dose PET. Consequently, by addressing the limitations of SR, the proposed method (e.g., the RF method discussed above) achieves much higher prediction accuracy.
  • Accordingly, novel methods, systems, and computer readable media are disclosed in which a standard, high-dose PET image is predicted using a machine learning based framework (e.g., implemented on a computing platform).
  • The proposed method utilizes low-dose PET, combined with an MR structural image, to predict standard-dose PET. Results shown and described herein illustrate that the instant method substantially improves the quality of low-dose PET. The prediction performance obtained also indicates good practicability of the proposed framework.
  • each element in the methods discussed above has its own contribution in improving the prediction performance.
  • high-resolution brain anatomical information provided by MRI helps low-dose PET to predict standard-dose PET.
  • the complementary information from different modalities significantly improves the prediction results.
  • The tissue-specific model attains better prediction performance than the global model. The main reason is that, due to the large volume of the human brain (often with different tissue properties), it is difficult to learn a global regression model for accurate prediction of standard-dose PET over the entire brain. In contrast, learning multiple tissue-specific models improved the prediction performance, as indicated in both Table 4 and Figure 6 discussed above.
  • tissue-specific models can be trained simultaneously, thus the training time can also be reduced significantly. Furthermore, by estimating image differences between previously-predicted standard-dose PET and the original standard-dose PET, the prediction accuracy can be further improved, especially for the voxels with maximal prediction errors using the previous layer model, as shown in Figure 8 discussed above.
  • Figure 13 is a block diagram illustrating an exemplary system or node 100 (e.g., a single or multiple processing core computing device or computing platform) for predicting standard, high-dose PET values and generating estimated high-dose PET images according to embodiments of the subject matter described herein.
  • Node 100 may include any suitable entity, such as a computing device or computing platform, for performing one or more aspects of the present subject matter described herein or in the manuscript entitled "Prediction of High-dose PET Image with MRI and Low-dose PET Images", the disclosure of which is incorporated herein by reference in its entirety.
  • components, computing modules, and/or portions of node 100 may be implemented or distributed across one or more (e.g., multiple) devices or computing platforms.
  • a cluster of nodes 100' may be used to perform various portions of high-dose PET image prediction, refinement, and/or application.
  • node 100 and its components and functionality described herein constitute a special purpose test node, special purpose computing device, or machine that improves the technological field of brain and/or body imaging (e.g., MR and/or PET imaging) by allowing prediction and generation of a high-dose PET image without performing a high-dose PET scan, thus advantageously reducing a patient's exposure to radiation.
  • MR and PET imaging, and improvements thereto, are necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks (i.e., the need to predict a high-dose PET scan from MR and low-dose PET scans).
  • The methods, systems, and computer readable media described herein are not capable of manual processing (i.e., they cannot be manually performed by a human being); as such, they are achieved upon utilization of physical computing hardware components, devices, and/or machines necessarily rooted in computer technology.
  • node 100 includes a computing platform that includes one or more processors 102.
  • Processor 102 includes a hardware processor or microprocessor, such as a multi-core processor, or any other suitable processing core, including processors for virtual machines, adapted to execute or implement instructions stored in a memory 104.
  • Memory 104 may include any non-transitory computer readable medium and may be operative to communicate with one or more of processors 102.
  • Memory 104 may include and/or have stored therein a standard or high-dose PET Prediction Module (HDPPM) 106 for execution by processor 102.
  • HDPPM 106 may be configured to extract appearance information or features from corresponding MR and low-dose PET image locations or patches, segment the MR image based upon a tissue type (e.g., GM, WM, or CSF), classify the MR and low-dose PET image locations or patches per tissue type, and execute and apply a tissue-specific model to the extracted information for predicting a high-dose PET image, per voxel, and for generating an estimated (i.e., not actually acquired) high-dose PET image.
  • Processor 102 may predict, and transmit as output, a high-dose PET image generated from a plurality of predicted high-dose PET voxels via HDPPM 106.
  • node 100 obviates the need for performing a high-dose PET scan, and instead predicts and generates a high-dose PET image from at least one low-dose PET image and at least one MR image.
  • This is advantageous, as a patient's exposure to radiation is minimized, in some aspects by approximately 1/2, 1/4, or by 1/10 or less.
  • the low-dose PET image and MR image may be obtained simultaneously (e.g., via a PET/MRI scanning system) for faster prediction/generation of a high-dose PET image.
  • the low-dose PET and MR images may be obtained separately (i.e., non-simultaneously) from separate PET/MR scanning machines or imaging systems.
  • HDPPM 106 is configured to implement one or more RF-based analysis and/or modeling techniques for use in the prediction of high-dose PET images. Exemplary RF techniques or models described above may be used, executed, and/or implemented by HDPPM 106. For example, HDPPM 106 may execute one or more tissue-specific models using, as inputs, appearance features extracted from at least one MR image (anatomical imaging features) and at least one low-dose PET image (molecular imaging features). HDPPM 106 may be configured to initially predict high-dose PET values and/or a high-dose PET image using tissue-specific RF modeling. HDPPM 106 may then incrementally refine the predicted values and predicted high-dose PET image via machine-estimated image differences. The estimated differences may be applied to the previously predicted and generated standard-dose PET values and image, respectively, for incremental refinement, where desired. HDPPM 106 may be used to predict high-dose PET values for generating estimated high-dose PET images of the brain and/or body.
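  • As a purely illustrative sketch, an HDPPM-like workflow could be organized as below; only the name HDPPM comes from this description, and the callables for segmentation, feature extraction, and the per-tissue forests are assumed to be supplied (for example, the sketches given earlier).

```python
# Hypothetical skeleton of an HDPPM-style module: initial per-tissue prediction
# followed by incremental refinement with difference models.
import numpy as np

class HDPPM:
    def __init__(self, segment, extract_features, tissue_forests, refinement_forests=()):
        self.segment = segment                        # MR image -> per-voxel tissue labels
        self.extract_features = extract_features      # (MR, low-dose PET) -> per-voxel features
        self.tissue_forests = tissue_forests          # initial per-tissue models ("Model 1")
        self.refinement_forests = refinement_forests  # difference models ("Model 2" .. "Model N")

    def predict(self, mr_image, low_dose_pet):
        labels = self.segment(mr_image)
        features = self.extract_features(mr_image, low_dose_pet)
        pred = self._apply(self.tissue_forests, features, labels)
        for models in self.refinement_forests:        # incremental refinement
            pred = pred + self._apply(models, features, labels)
        return pred

    @staticmethod
    def _apply(models, features, labels):
        # Route each voxel's feature vector to the forest for its tissue type.
        pred = np.zeros(labels.shape[0])
        for label, rf in models.items():
            mask = labels == label
            if mask.any():
                pred[mask] = rf.predict(features[mask])
        return pred
```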
  • In some aspects, node 100 receives extracted appearance features from at least one low-dose PET image and at least one MR image of a subject brain or body portion.
  • node 100 is configured to receive the images as input, and extract the appearance features from at least one low-dose PET image and at least one corresponding MR image via HDPPM 106.
  • the corresponding MR/low-dose PET images may be received and aligned onto a common space, and appearance features (e.g., local intensity patches at a same location) may be extracted by HDPPM 106.
  • the MR image may also be segmented by tissue at each location (e.g., into GM, WM, CSF in the brain) by HDPPM 106.
  • tissue-specific RF is used to predict a high-dose PET value for each voxel.
  • a high-dose PET image can be predicted and generated at HDPPM 106.
  • HDPPM 106 may further refine the predicted image by applying tissue-specific regression forests (e.g., models) to extracted appearance features to obtain the predicted difference values for each voxel.
  • HDPPM 106 may be configured to work in parallel with a plurality of processors (e.g., processors 102) and/or other computing platforms or nodes.
  • a plurality of processor cores may each be associated with a tissue specific model and/or imaging technique (e.g., receiving MR or low-dose PET features).
  • Figure 13 is for illustrative purposes and that various nodes, their locations, and/or their functions may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. In a second example, a node and/or function may be located at or implemented by two or more nodes.
  • The methods, systems, and computer readable media described herein can improve imaging during an uptake interval by improving image quality without further increasing the tracer dosage.
  • For example, multiple scans may be taken over the uptake time, which is the time over which a tracer is metabolized, until reaching a steady state Tss.
  • the shorter, almost dynamically obtained scans i.e., obtained at each b
  • Figure 15 is a block diagram illustrating an exemplary method of predicting and generating an estimated high-dose PET image without actually performing a high-dose PET scan. The method may be performed at a computing node (e.g., node 100, Figure 13) having a processor and executable instructions stored thereon that, when executed by the processor, control the node to perform steps such as those in Figure 15.
  • appearance features may be extracted from at least one MR image.
  • Appearance features include information or data regarding a tissue structure, anatomical information, molecular information, or functional information (e.g., tissue perfusion, diffusion, vascular permeability, or the like) and/or information per image location, as indicated by a local intensity.
  • the MR image may also be segmented or categorized (e.g., by location) upon a tissue type (e.g., GM, WM, CSF in the brain), where needed (e.g., brain imaging).
  • appearance features may be extracted from at least one low-dose PET image.
  • Appearance features obtained from low-dose PET imaging may include a local intensity that is indicative of metabolic information derived from impingement of gamma rays to tissue injected with a biologically active radioactive tracer.
  • the information obtained from low- dose PET image is associated with tissue metabolic activity.
  • the appearance features of MR/low-dose PET images can be aligned, classified per tissue type, and input into tissue-specific RF (e.g., models) for predicting high-dose PET values per voxel, from which a high-dose PET image is generated.
  • the predicted values may be iteratively refined as described above (see, e.g., Table 2).
  • an estimated standard-dose (high-dose) PET image may be generated using the appearance features of the MR image and the low-dose PET image, and the high-dose PET values predicted therefrom.
  • the appearance features include local intensity patches, to which tissue-specific RF models are applied to predict a high-dose PET value per voxel. By iterating over all image voxels, a high-dose PET image can be predicted and generated without subjecting a patient to a high-dose PET scan.
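Reading the description literally, the whole-image step could be sketched as the loop below, which visits every labeled voxel, extracts its concatenated patch features, and writes the prediction of the matching tissue-specific forest into the output volume. The helpers and the masking convention are the illustrative assumptions introduced in the earlier sketches, not routines named in the specification.

```python
import numpy as np

def generate_high_dose_pet(forests, mr, low_dose_pet, tissue_labels, radius=2):
    """Voxel-wise generation of an estimated high-dose PET volume.

    Reuses the illustrative `appearance_features` helper sketched earlier and assumes
    `tissue_labels` is zero outside the head and within `radius` voxels of the border.
    """
    predicted = np.zeros_like(low_dose_pet, dtype=float)
    zs, ys, xs = np.nonzero(tissue_labels)
    for z, y, x in zip(zs, ys, xs):
        f = appearance_features(mr, low_dose_pet, (z, y, x), radius).reshape(1, -1)
        forest = forests[int(tissue_labels[z, y, x])]
        predicted[z, y, x] = forest.predict(f)[0]
    return predicted
```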
  • Figure 16 is a schematic block diagram illustrating an overview of training a model M for high-dose PET prediction and image generation via model learning (e.g., machine learning).
  • the methods, systems, and computer readable media include machine learning-based methodology, for example, a computing machine having a decision tree-based (e.g., RF) prediction of a high-dose PET image using MR and low-dose PET images.
  • the machine learning methodology includes two main stages, i.e., a training stage and an application stage.
  • estimated high-dose PET images may be used for at least one of diagnosis, treatment, and/or treatment planning of one or more patients.
  • one task includes learning decision trees for generating an RF model.
  • Multiple trees can be grouped to form a forest; in the case of regression, the random forest is often called a regression forest (RF).
  • learned parameters of the tree are stored at each node (i.e., a split node or leaf/terminal node).
  • the input is a set of voxels, and the corresponding high-dose PET intensities, as shown in Figure 16.
  • Each voxel is represented by a pair of MR and low-dose PET patches P1 and P2.
  • the goal of training is to learn multiple trees (a forest, as shown in Figure 17) for best predicting the high-dose PET intensity P3 from a pair of MR and low-dose PET patches P1 and P2.
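As a hedged sketch of what this training stage might look like in code, the snippet below pairs each concatenated patch with the high-dose intensity at its centre and fits a forest; scikit-learn's RandomForestRegressor is used as a generic stand-in for the regression forest described in the specification, and the patch helper is the illustrative one sketched earlier.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_training_set(mr, low_dose_pet, high_dose_pet, sample_coords, radius=2):
    """Pair each concatenated MR/low-dose PET patch (P1, P2) with the high-dose
    intensity (P3) at the patch centre; reuses the illustrative patch helper above."""
    X = np.stack([appearance_features(mr, low_dose_pet, c, radius) for c in sample_coords])
    y = np.array([high_dose_pet[c] for c in sample_coords])
    return X, y

def train_forest(X, y, n_trees=50):
    """Fit a regression forest on the sampled training voxels."""
    forest = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    forest.fit(X, y)
    return forest
```

A tissue-specific model would then be obtained simply by restricting `sample_coords` to voxels carrying that tissue label.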
  • a high-dose PET image is predicted voxel by voxel.
  • a pair of MR and low-dose PET patches is extracted, centered at that respective voxel (i.e., as shown in Figure 18).
  • the high-dose PET intensity at that voxel can be calculated or generated.
  • each split decision or split node (i.e., one node of a tree, shown as a solid circle, with leaf nodes shown as broken circles) stores a split function's parameters, including one selected feature index and its corresponding threshold.
  • a feature vector f is computed as the concatenated vector from a pair of MRI and low-dose PET patches.
  • the parameters stored at the j-th split node include one selected feature index φ(j) and the corresponding threshold θ(j).
  • each leaf node (i.e., each leaf, indicated by a broken circle labeled 4, 6, 8, 9, 10, or 11) stores the mean high-dose PET intensity (e.g., Imean(j)) of the voxels that reach that j-th node (split nodes are indicated by solid circles labeled 1, 2, 3, 5, and 7).
  • each tree T1 to TT is a prediction model (e.g., an RF or regression model) or prediction result.
  • the input to each tree is an MRI patch and its corresponding low-dose PET patch.
  • the output from each tree is the predicted high-dose PET intensity at the center location of the given MRI patch.
  • RF models consist of multiple trees, and the final prediction of an RF model is the average of the predicted values from all individual trees.
  • Each tree is also referred to as a binary decision tree and is a prediction model, similar to a linear regression model, but used for non-linear regression problems.
  • the output from each tree is a high-dose PET intensity value predicted at a center location of a given MR/low-dose PET patch.
  • The average of all trees in the forest is the final prediction result.
  • the prediction process of Figure 17 is repeated patch by patch. Specifically, for each location, MRI and low-dose PET patches are extracted to predict the high-dose PET intensity at a given location.
  • the split function shown in Figure 17 includes a specific feature and a threshold.
  • the type of feature and the value of threshold are automatically learned according to the training data.
  • the best combinations of feature and threshold will be learned to predict the high-dose PET from features of MR and low-dose PET patches.
  • the split functions can be fixed and applied to a new subject.
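In practice, "fixing" the learned split functions usually amounts to serializing the trained forest once and reloading it for each new subject. A tiny sketch with joblib follows; the toy data and file name are placeholders, not artifacts from the specification.

```python
import numpy as np
from joblib import dump, load
from sklearn.ensemble import RandomForestRegressor

# Train a small toy forest, persist it, and reload it when a new subject is scanned.
forest = RandomForestRegressor(n_estimators=5, random_state=0)
forest.fit(np.random.rand(20, 10), np.random.rand(20))
dump(forest, "pet_prediction_forest.joblib")     # file name is illustrative
reloaded = load("pet_prediction_forest.joblib")
```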
  • Figure 18 is a specific example associated with the application stage, where an RF model has been learned and can now be applied to the extracted MR/low-dose PET features or intensity values for predicting high-dose PET intensities.
  • a 2D case is shown.
  • the 3D case can be readily derived using the 2D case as an example.
  • Figure 18 illustrates MR and low-dose PET images for a new subject. That is, each new subject will have at least one MR image and at least one low-dose PET image generated for a brain or non-brain body part.
  • two 3 × 3 patches are extracted (i.e., one from the MR image and one from the low-dose PET image, centered at the voxel of interest) and concatenated into the feature vector f.
  • the feature vector f is passed through each learned decision tree
  • the prediction of each tree is the mean high-dose PET intensity stored at the leaf node into which this voxel falls.
  • the routing of the voxel in one tree is as follows.
  • Figure 18 illustrates a specific example of a voxel-wise prediction procedure for one tree.
  • the input is the feature vector f (i.e., extracted for MR/low-dose PET patches).
  • the output is the final predicted intensity value of the voxel in a predicted high-dose PET image F P .
  • the feature vector f is fed to the tree T1; f first reaches node 1 (i.e., the double circle with a "1" in it), a split node.
  • a learned split function (i.e., as listed in Table 1 of Figure 18) is applied to f.
  • the value of the inequality is a decision in the RF decision tree; here f_φ(1) > θ(1) is true, so f goes to the right child, i.e., node 3 (see decision tree T1 in Figure 18, where nodes are labeled 1-11).
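The routing and averaging just described can be made concrete with the toy sketch below, in which each split node stores its learned feature index and threshold and each leaf stores a mean high-dose intensity. The node layout and the numbers are placeholders, not the contents of Table 1.

```python
def route(node, f):
    """Walk one binary decision tree: dict nodes are split nodes storing a learned
    feature index and threshold; numeric leaves store a mean high-dose intensity."""
    while isinstance(node, dict):
        branch = "right" if f[node["feature_index"]] > node["threshold"] else "left"
        node = node[branch]
    return node

def forest_predict(trees, f):
    """The forest's output is the average of the individual tree predictions."""
    return sum(route(tree, f) for tree in trees) / len(trees)

# Toy tree mimicking the routing described above (all numbers are placeholders):
toy_tree = {"feature_index": 0, "threshold": 0.5,
            "left": 120.0,
            "right": {"feature_index": 4, "threshold": 1.2,
                      "left": 150.0, "right": 175.0}}
example = forest_predict([toy_tree], [0.9, 0.1, 0.3, 0.2, 2.0])   # -> 175.0
```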
  • an RF-based framework for high-dose PET image prediction and generation is proposed, for effectively predicting and generating standard-dose (high-dose) PET images by using the combination of MRI and low-dose PET, thus reducing the required radionuclide dose.
  • tissue-specific models are built via RF framework for separately predicting standard-dose PET values and images in different tissue types, such as GM, WM, and CSF type brain tissue.
  • an incremental refinement strategy is also employed, in which an image difference is estimated to refine the predicted high-dose PET values and image. Results described herein illustrate that this method can achieve very promising, accurate, machine-learned prediction for generation of standard-dose PET images.
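A very rough schematic of this incremental refinement idea is given below: each round fits a forest to the remaining difference between the target and the current prediction and accumulates the result. This is only a hedged illustration of the difference-estimation idea, not the exact procedure referenced above (Table 2).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def incremental_refinement(X, y, n_rounds=3, n_trees=50):
    """Schematic of difference-based refinement: after an initial prediction, each
    round fits a new forest to the remaining difference (target minus current
    prediction) and adds its output back in."""
    prediction = np.zeros_like(y, dtype=float)
    stages = []
    for _ in range(n_rounds):
        difference = y - prediction
        forest = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        forest.fit(X, difference)
        prediction += forest.predict(X)
        stages.append(forest)
    return stages, prediction
```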
  • the proposed method outperforms the SR technique under various comparisons.
  • aspects as disclosed herein can provide, for example and without limitation, one or more of the following beneficial technical effects: minimized exposure of patients to radiation; improved imaging at lower dosages of radionuclides; improved accuracy/refinement of predicted image; obtaining faster results (e.g., via simultaneous MR/PET imaging).
  • the methods, systems, and computer readable media herein are performed at a prediction node (e.g., Figure 13).
  • the prediction node and/or functionality associated with the prediction node as described herein constitute a special purpose computer. It will be appreciated that the prediction node and/or functionality described herein improve the technological field pertaining to brain and/or body imaging performed at a special MR and/or PET imaging machine, which may be combined or separate. Predicting high-dose PET imaging via a prediction node is necessarily rooted in computer technology, as it overcomes a problem specifically arising in that realm, for example, obtaining a high-dose PET image without having to actually perform a high-dose PET scan.
  • Some embodiments of the present subject matter can utilize devices, systems, methods, and/or computer readable media such as those described in any of the following publications, each of which is hereby incorporated by reference as if set forth fully herein:
  • Bai, W., Brady, M., 2011. Motion correction and attenuation correction for respiratory gated PET images. IEEE Trans. Med. Imaging 30, 351-365.


Abstract

Methods, systems, and computer readable media for predicting high-dose positron emission tomography (PET) values and/or images are disclosed. A method for predicting and generating a high-dose PET image is performed at a prediction node comprising at least one computer processor, and includes extracting appearance features from at least one magnetic resonance (MR) image, extracting appearance features from at least one low-dose PET image, and generating a high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
PCT/US2015/047425 2014-08-29 2015-08-28 Restauration de la qualité d'image de tomographie par émission de positons (tep) à dose réduite de radiotraceur en utilisant la pet et la résonance magnétique (rm) combinées Ceased WO2016033458A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462044154P 2014-08-29 2014-08-29
US62/044,154 2014-08-29

Publications (1)

Publication Number Publication Date
WO2016033458A1 true WO2016033458A1 (fr) 2016-03-03

Family

ID=55400657

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/047425 Ceased WO2016033458A1 (fr) 2014-08-29 2015-08-28 Restauration de la qualité d'image de tomographie par émission de positons (tep) à dose réduite de radiotraceur en utilisant la pet et la résonance magnétique (rm) combinées

Country Status (1)

Country Link
WO (1) WO2016033458A1 (fr)


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BERND J. PICHLER ET AL.: "PET/MRI: paving the way for the next generation of clinical multimodality imaging applications", J NUCL MED, vol. 51, 11 February 2010 (2010-02-11), pages 333 - 336 *
CHRISTOPHER COELLO ET AL.: "Correction of partial volume effect in 18F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter", NEUROIMAGE, vol. 72, 28 January 2013 (2013-01-28), pages 183 - 192 *
FLEMMING LITTRUP ANDERSEN ET AL.: "Combined PET/MR imaging in neurology: MR-based attenuation correction implies a strong spatial bias when ignoring bone", NEUROIMAGE, vol. 84, 29 August 2013 (2013-08-29), pages 206 - 216 *
JIAYIN KANG ET AL.: "Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images", MED. PHYS., vol. 42, no. 9, 18 August 2015 (2015-08-18), pages 5301 - 5309 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110753935A (zh) * 2017-04-25 2020-02-04 小利兰·斯坦福大学托管委员会 用于医学成像的使用深度卷积神经网络的剂量减少
WO2019081256A1 (fr) * 2017-10-23 2019-05-02 Koninklijke Philips N.V. Optimisation de la conception des systèmes de tomographie par émission de positrons (pet) à l'aide de l'imagerie profonde
US11748598B2 (en) 2017-10-23 2023-09-05 Koninklijke Philips N.V. Positron emission tomography (PET) system design optimization using deep imaging
WO2019204146A1 (fr) * 2018-04-18 2019-10-24 Sony Interactive Entertainment Inc. Incorporation de contexte de capture de dynamique d'image
US11967127B2 (en) 2018-04-18 2024-04-23 Sony Interactive Entertainment Inc. Context embedding for capturing image dynamics
CN112384279A (zh) * 2018-06-18 2021-02-19 皇家飞利浦有限公司 治疗规划设备
CN112384279B (zh) * 2018-06-18 2023-08-22 皇家飞利浦有限公司 治疗规划设备
CN109215093B (zh) * 2018-07-27 2022-12-23 深圳先进技术研究院 低剂量pet图像重建方法、装置、设备及存储介质
CN109215093A (zh) * 2018-07-27 2019-01-15 深圳先进技术研究院 低剂量pet图像重建方法、装置、设备及存储介质
CN109949318A (zh) * 2019-03-07 2019-06-28 西安电子科技大学 基于多模态影像的全卷积神经网络癫痫病灶分割方法
CN109949318B (zh) * 2019-03-07 2023-11-14 西安电子科技大学 基于多模态影像的全卷积神经网络癫痫病灶分割方法
WO2021061710A1 (fr) * 2019-09-25 2021-04-01 Subtle Medical, Inc. Systèmes et procédés pour améliorer une irm améliorée par contraste volumétrique à faible dose
US20230296709A1 (en) * 2019-09-25 2023-09-21 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
JP2022550688A (ja) * 2019-09-25 2022-12-05 サトゥル メディカル,インコーポレイテッド 低投与量容積造影mriを改良するためのシステム及び方法
US20220334208A1 (en) * 2019-09-25 2022-10-20 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced mri
US11624795B2 (en) * 2019-09-25 2023-04-11 Subtle Medical, Inc. Systems and methods for improving low dose volumetric contrast-enhanced MRI
US20210196219A1 (en) * 2019-12-31 2021-07-01 GE Precision Healthcare LLC Methods and systems for motion detection in positron emission tomography
US11918390B2 (en) 2019-12-31 2024-03-05 GE Precision Healthcare LLC Methods and systems for motion detection in positron emission tomography
US11179128B2 (en) * 2019-12-31 2021-11-23 GE Precision Healthcare LLC Methods and systems for motion detection in positron emission tomography
WO2021182103A1 (fr) * 2020-03-11 2021-09-16 国立大学法人筑波大学 Programme de génération de modèle entraîné, programme de génération d'image, dispositif de génération de modèle entraîné, dispositif de génération d'image, procédé de génération de modèle entraîné et procédé de génération d'image
JPWO2021182103A1 (fr) * 2020-03-11 2021-09-16
CN115243618A (zh) * 2020-03-11 2022-10-25 国立大学法人筑波大学 训练完毕模型生成程序、图像生成程序、训练完毕模型生成装置、图像生成装置、训练完毕模型生成方法以及图像生成方法
JP7527675B2 (ja) 2020-03-11 2024-08-05 国立大学法人 筑波大学 学習済モデル生成プログラム、画像生成プログラム、学習済モデル生成装置、画像生成装置、学習済モデル生成方法及び画像生成方法
US12367620B2 (en) 2020-03-11 2025-07-22 University Of Tsukuba Trained model generation program, image generation program, trained model generation device, image generation device, trained model generation method, and image generation method
WO2022120588A1 (fr) * 2020-12-08 2022-06-16 深圳先进技术研究院 Procédé et système de restauration d'image tep à faible dose, dispositif et support
US12412369B2 (en) 2020-12-08 2025-09-09 Shenzhen Institutes Of Advanced Technology Low-dose PET image restoration method and system, device, and medium
WO2023272491A1 (fr) * 2021-06-29 2023-01-05 深圳高性能医疗器械国家研究院有限公司 Procédé de reconstruction d'image pet basé sur un apprentissage de dictionnaire conjoint et un réseau profond

Similar Documents

Publication Publication Date Title
US12115015B2 (en) Deep convolutional neural networks for tumor segmentation with positron emission tomography
WO2016033458A1 (fr) Restauration de la qualité d'image de tomographie par émission de positons (tep) à dose réduite de radiotraceur en utilisant la pet et la résonance magnétique (rm) combinées
US12171542B2 (en) Systems and methods for estimating histological features from medical images using a trained model
Ramon et al. Improving diagnostic accuracy in low-dose SPECT myocardial perfusion imaging with convolutional denoising networks
Kang et al. Prediction of standard‐dose brain PET image by using MRI and low‐dose brain [18F] FDG PET images
Yang et al. MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning
US11020077B2 (en) Simultaneous CT-MRI image reconstruction
Huynh et al. Estimating CT image from MRI data using structured random forest and auto-context model
US8655040B2 (en) Integrated image registration and motion estimation for medical imaging applications
Zaidi et al. Novel quantitative PET techniques for clinical decision support in oncology
US11995745B2 (en) Systems and methods for correcting mismatch induced by respiratory motion in positron emission tomography image reconstruction
Sucharitha et al. RETRACTED: Deep learning aided prostate cancer detection for early diagnosis & treatment using MR with TRUS images
US10910101B2 (en) Image diagnosis support apparatus, image diagnosis support method, and image diagnosis support program
Jiang et al. Super resolution of pulmonary nodules target reconstruction using a Two-Channel GAN models
US20200261032A1 (en) Automatic identification and segmentation of target regions in pet imaging using dynamic protocol and modeling
Kang et al. Prediction of standard-dose PET image by low-dose PET and MRI images
Turco et al. Partial volume and motion correction in cardiac PET: First results from an in vs ex vivo comparison using animal datasets
Wang et al. A preliminary study of dual‐tracer PET image reconstruction guided by FDG and/or MR kernels
Elnakib Developing advanced mathematical models for detecting abnormalities in 2D/3D medical structures.
Liu et al. Improving Automatic Segmentation of lymphoma with Additional Medical Knowledge Priors
Hachama et al. A classifying registration technique for the estimation of enhancement curves of DCE-CT scan sequences
Meharban et al. A comprehensive review on MRI to CT and MRI to PET image synthesis using deep learning
Hosseinabadi et al. Left Atrial Segmentation with nnU-Net Using MRI
Roy et al. 5 Enhancing with Modality-Based Patient Care Image Registration in Modern Healthcare
Jaganathan et al. MultiResolution 3D Magnetic Resonance Imaging Analysis for Prostate Cancer Imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15835060

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15835060

Country of ref document: EP

Kind code of ref document: A1