
US20250221670A1 - Method and system to compute hemodynamic parameters - Google Patents

Method and system to compute hemodynamic parameters

Info

Publication number
US20250221670A1
Authority
US
United States
Prior art keywords
data
perfusion data
signal
perfusion
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/408,135
Inventor
Thierry Galas
Theo Champion
Charly Emmanuel Girot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GE Precision Healthcare LLC
Original Assignee
GE Precision Healthcare LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by GE Precision Healthcare LLC
Priority to US18/408,135
Assigned to GE Precision Healthcare LLC (assignors: GALAS, Thierry; CHAMPION, Theo; GIROT, Charly Emmanuel)
Priority to EP24221472.4A (published as EP4585158A1)
Priority to CN202411952056.7A (published as CN120298293A)
Publication of US20250221670A1
Legal status: Pending

Classifications

    • G06T 7/0012: Biomedical image inspection
    • A61B 6/507: Radiation diagnosis apparatus specially adapted for determination of haemodynamic parameters, e.g. perfusion CT
    • A61B 5/02028: Determining haemodynamic parameters not otherwise provided for, e.g. cardiac contractility or left ventricular ejection fraction
    • A61B 5/026: Measuring blood flow
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • A61B 6/5217: Extracting a diagnostic or physiological parameter from medical diagnostic data
    • G01R 33/56366: Perfusion imaging (NMR)
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06T 5/60: Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G16C 10/00: Computational theoretical chemistry
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment
    • A61B 5/02125: Measuring pressure in heart or blood vessels from analysis of pulse wave propagation time
    • A61B 5/0263: Measuring blood flow using NMR
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30104: Vascular flow; Blood flow; Perfusion

Definitions

  • the subject matter disclosed herein relates to the use of deep neural networks to obtain hemodynamic parameters by identifying a non-parametric model from computed tomography (CT) perfusion.
  • Non-invasive imaging technologies allow images of the internal structures or features of a patient or object to be obtained without performing an invasive procedure on the patient or object.
  • Such non-invasive imaging technologies rely on various physical principles (such as the differential transmission of X-rays through a target volume, the reflection of acoustic waves within the volume, the paramagnetic properties of different tissues and materials within the volume, the breakdown of targeted radionuclides within the body, and so forth) to acquire data and to construct images or otherwise represent the observed internal features of the patient/object.
  • computed tomography perfusion is an imaging modality used to evaluate microcirculation in tissues.
  • Computed tomography perfusion imaging allows determination of absolute regional measurements of hemodynamic parameters (e.g., blood flow (BF), blood volume (BV), mean transit time (MTT), time to maximum (TMAX)).
  • Visually coded (e.g., color coded, gray scale, annotated, and so forth) maps of these hemodynamic parameters (e.g., hemodynamic parametric maps) may be produced for comparison against normal or baseline values.
  • Threshold or baseline values of these hemodynamic parameters may be established to monitor modifications in microcirculation, which may be used to characterize various pathologies, such as ischemia in organs (e.g., brain, myocardium, lung), tumor neo-vascularization state and changes, characteristics of specific organs (e.g., liver, kidney, lung), etc.
  • a series of images are acquired for a region of interest (e.g., a tissue), which include images taken before, during, and after an injection of a contrast agent (e.g., a tracer bolus) or other blood-marking technique (e.g., arterial spin labeling (ASL)) applied to the region of interest.
  • Deep learning algorithms are trained using synthetic (e.g., simulated) data, which is generated based on the 4D computed tomography perfusion data, to obtain a residual impulse function Q(t) of the region of interest.
  • the deep learning algorithms are also trained to reduce/mitigate the image non-idealities in the 4D computed tomography perfusion data.
  • Neural networks trained in this manner are used to estimate the residual impulse function Q(t) of the region of interest, which is used to determine corresponding hemodynamic parameters of the region of interest.
  • a method for calculating hemodynamic parameters is provided.
  • a set of perfusion data is acquired for a region of interest using an imaging system.
  • An artery signal is obtained from the set of perfusion data.
  • a tissue signal is obtained from the set of perfusion data.
  • the artery signal and the tissue signal are provided as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest.
  • the one or more neural networks are trained using synthetic data.
  • the set of perfusion data may comprise computed tomography (CT) perfusion data.
  • the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
  • the one or more synthetic data are generated based on a defined ground truth model.
  • the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest. In such an embodiment the one or more hemodynamic parameters may be determined from the residual impulse function.
  • the one or more neural networks are trained to correct image non-idealities in the set of perfusion data.
  • the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or time to max (TMAX).
  • a system is provided comprising one or more processors and memory accessible by the one or more processors, the memory storing instructions.
  • the instructions when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving a set of perfusion data acquired using an imaging system to image a region of interest; obtaining an artery signal from the set of perfusion data; obtaining a tissue signal from the set of perfusion data; and providing the artery signal and the tissue signal to serve as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest, wherein the one or more neural networks are trained using one or more synthetic data.
  • the set of perfusion data may comprise computed tomography (CT) perfusion data.
  • the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
  • the one or more synthetic data are generated based on a defined ground truth model.
  • the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest. In such an embodiment the one or more hemodynamic parameters may be determined from the residual impulse function.
  • the one or more neural networks are trained to correct image non-idealities in the set of perfusion data.
  • the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or a time to maximum (TMAX).
  • a method for training one or more neural networks is provided.
  • a set of synthetic residual impulse functions for a region of interest is generated based on a defined ground truth model.
  • An artery signal is obtained from a set of perfusion data.
  • a synthetic tissue signal is generated based on the set of synthetic residual impulse functions and the artery signal.
  • the one or more neural networks are trained using a signal generated using the synthetic tissue signal and the artery signal.
  • the synthetic tissue signal may comprise a perturbation emulating perturbations of the perfusion data.
  • the perturbation may be associated with registration errors that may occur during perfusion data acquisition (e.g., patient movements during acquisition) or with noise originating from the image acquisition technique being used (e.g., Gaussian noise, artefacts, speckle noise, bolus superposition).
  • the set of perfusion data may comprise computed tomography (CT) perfusion data.
  • the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
  • a loss is used as a bias for the training of the one or more neural networks.
  • the loss is determined based on a comparison of a first set of parameters derived from an estimated residual impulse function output from the one or more neural networks and a second set of parameters derived from the synthetic residual impulse function.
  • the first set of parameters may comprise a first set of hemodynamic parameters and the second set of parameters may comprise a second set of hemodynamic parameters.
  • the training of the one or more neural networks is determined to be finished when the loss is less than a threshold or after a number of epochs.
  • a regularization is used as a bias for the training of the one or more neural networks.
  • the regularization is associated with characteristics of an estimated residual impulse function output from the one or more neural networks.
  • the regularization may be weighted by the deconvolution error.
  • FIG. 1 depicts an example of an artificial neural network for training a deep learning model, in accordance with aspects of the present disclosure
  • FIG. 2 is a block diagram depicting components of a computed tomography (CT) imaging system, in accordance with aspects of the present disclosure
  • FIG. 3 depicts a block diagram of a computing system used to analyze images obtained from the computed tomography imaging system of FIG. 2 , in accordance with aspects of the present disclosure
  • FIG. 4 depicts a rendering of a simplified indicator dilution physical model, in accordance with aspects of the present disclosure
  • FIG. 5 depicts a flow chart illustrating a method to obtain a residual impulse function and corresponding hemodynamic parameters for a region of interest, in accordance with aspects of the present disclosure
  • FIG. 6 depicts a flow chart illustrating a method for training a deep learning (DL) model, in accordance with aspects of the present disclosure
  • FIG. 7 depicts a flow chart illustrating a method for using a deep learning model to predict an estimated residual impulse function, in accordance with aspects of the present disclosure
  • FIG. 8 depicts a flow chart illustrating a method for training the deep learning model of FIG. 7 , in accordance with aspects of the present disclosure.
  • FIG. 9 depicts a ground truth model that may be used for synthetic data generation in FIG. 6 , in accordance with aspects of the present disclosure.
  • Perfusion generally includes injection of a venous bolus of a contrast agent (e.g., a substance or composition that is used to enhance the visibility of a tissue (such as blood) or other media that might otherwise be difficult to observe in images generated using a given imaging modality) into a tissue and acquisition of multiple phases of the tissue after the bolus injection using an imaging modality, such as a computed tomography (CT) scanner.
  • Computed tomography perfusion imaging allows absolute regional measurements of hemodynamic parameters (e.g., blood flow (BF), blood volume (BV), mean transit time (MTT), time to max (TMAX)).
  • Color or other visually coded maps of these hemodynamic parameters may be produced for comparison against normal or baseline values for the individual (e.g., longitudinal studies) or for a relevant population or sub-population. Threshold values of these hemodynamic parameters may be established to monitor modifications in microcirculation, which may be used to characterize various pathologies, such as ischemia in organs (e.g., brain, myocardium, lung), tumor neo-vascularization state and changes, characteristics of specific organs (e.g., liver, kidney, lung), etc.
  • the computed tomography perfusion acquisition data may include three dimensional spatial data and one dimensional temporal data, which together constitute four dimensional (4D) computed tomography perfusion data.
  • Generation of hemodynamic parametric maps from the four dimensional (4D) computed tomography perfusion acquisition involves the use of a deconvolution algorithm to retrieve hemodynamic features from voxel-wise one dimensional (1D) temporal signal.
  • the present discussion is directed to using deep learning (DL) approaches to solve the deconvolution problem and to generate the hemodynamic parametric maps or other comparable data outputs; for context, a conventional baseline is sketched below.
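For context, the following minimal Python sketch shows one conventional (non-deep-learning) baseline for this deconvolution: building the discrete convolution matrix of the artery signal and inverting it with a truncated pseudo-inverse. The function name and the regularization choice (rcond) are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def deconvolve_baseline(c_a, c_r, dt, rcond=0.1):
    """Recover Q(t) from C_r = dt * (C_a convolved with Q).

    Builds the lower-triangular (causal) convolution matrix of the
    sampled artery signal and inverts it with a truncated
    pseudo-inverse; discarding small singular values stabilizes the
    solution against noise.
    """
    n = len(c_a)
    a = dt * np.array([[c_a[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    return np.linalg.pinv(a, rcond=rcond) @ c_r
```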
  • imaging modalities such as X-ray computed tomography (e.g., multi-slice CT, helical CT, cone beam CT) and X-ray C-arm systems (e.g., cone beam imaging), measure projections of the object or patient being scanned where the projections, depending on the technique, correspond to Radon transform data, fan beam transform data, cone beam transform data, or non-uniform Fourier transforms.
  • the scan data may be magnetic resonance data (e.g., magnetic resonance imaging (MRI) data) generated in response to applied magnetic fields and RF pulses, and so forth.
  • single photon emission computed tomography (SPECT) and positron emission tomography (PET) may utilize a radiopharmaceutical that is administered to a patient and whose decay results in the emission of gamma rays from locations within the patient's body.
  • the radiopharmaceutical is typically selected so as to be preferentially or differentially distributed in the body based on the physiological or biochemical processes in the body.
  • a radiopharmaceutical may be selected that is preferentially processed or taken up by tumor tissue.
  • the radiopharmaceutical will typically be disposed in greater concentrations around tumor tissue within the patient.
  • an ultrasound imaging system may acquire ultrasound data of a patient.
  • the ultrasound system may be a digital acquisition and beam former system, but in other embodiments, the ultrasound system may be any suitable type of ultrasound system.
  • Such an ultrasound system may include the ultrasound probe and a workstation (e.g., monitor, console, user interface) which may control operation of the ultrasound probe and may process image data acquired by the ultrasound probe.
  • the ultrasound probe may be coupled to the workstation by any suitable technique for communicating image data and control signals between the ultrasound probe and the workstation such as a wireless, optical, coaxial, or other suitable connection.
  • Reconstruction routines and related correction and calibration routines are employed in conjunction with these imaging modalities to generate useful clinical images and/or data, which in turn may be used to derive or measure hemodynamic parameters of interest, such as by using deep learning (DL) techniques, as discussed herein.
  • Deep learning (DL) approaches discussed herein may be based on artificial neural networks, and may therefore encompass one or more of deep neural networks, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), and so forth.
  • deep learning techniques (which may also be known as deep machine learning, hierarchical learning, or deep structured learning) are a branch of machine learning techniques that employ mathematical representations of data and artificial neural networks for learning and processing such representations.
  • Neural networks may include multiple layers, such as input layers, hidden layers, and output layers. The basic unit of computation in a neural network is the neuron/node.
  • Each neuron/node receives inputs from some other nodes, or from an external source and computes outputs.
  • the input layer may include neurons/nodes to receive external inputs, such as input data.
  • Each hidden layer is made up of a set of neurons/nodes that have learnable weights and biases, and each neuron/node in the hidden layers may receive inputs from upstream connected nodes or layers and perform operations on the inputs to compute outputs that are provided to downstream connected nodes or layers.
  • the output layer may include neurons/nodes to receive inputs from the hidden layers and output results.
  • DL approaches may be characterized by their use of one or more algorithms to extract or model high level abstractions of a type of data of interest. This may be accomplished using one or more processing layers, with each layer typically corresponding to a different level of abstraction and, therefore potentially employing or utilizing different aspects of the initial data or outputs of a preceding layer (i.e., a hierarchy or cascade of layers) as the target of the processes or algorithms of a given layer. In an image processing or reconstruction context, this may be characterized as different layers corresponding to the different feature levels or resolution in the data. In general, the processing from one representation space to the next level representation space can be considered as one ‘stage’ of the process. Each stage of the process can be performed by separate neural networks or by different parts of one larger neural network.
  • the techniques discussed herein utilize deep learning (DL) approaches to estimate hemodynamic parameters from the computed tomography perfusion data.
  • deep learning algorithms are trained using synthetic (e.g., simulated) data generated based on clinical data, as opposed to being trained directly on clinical, real-world data or geometric constructs.
  • the use of synthetic data for training one or more deep learning algorithms contrasts with the direct use of clinical data for such training purposes, which may involve either estimation of the ground truth state or the acquisition of additional data that is representative of the ground truth state and the registration of the additional data to the clinical data to assemble the training data.
  • training data sets may be employed that have known initial values and known (i.e., ground truth) values for a final output of the deep learning process.
  • the ground truth training data may be used to train a network to provide the known correct outputs in response to the known inputs.
  • the synthetic data is used as training data, where the synthesized data is simulated or synthesized or derived from clinical data and/or simple geometric constructs, but is distinct from the clinical data.
  • the synthetic training data discussed herein are associated with known ground truth properties, without having to estimate or measure such ground truths or perform additional invasive operations to derive such ground truth properties.
  • the training of a single stage may have known input values corresponding to one representation space and known output values corresponding to a next level representation space.
  • the deep learning algorithms may process (either in a supervised or guided manner or in an unsupervised or unguided manner) the known or training data sets until the mathematical relationships between the initial data and desired output(s) are discerned and/or the mathematical relationships between the inputs and outputs of each layer are discerned and characterized.
  • separate validation data sets may be employed in which both the initial and desired target values are known, but only the initial values are supplied to the trained deep learning algorithms, and the outputs of the deep learning algorithm are compared to the desired target values to validate the prior training and/or to prevent over training.
  • FIG. 1 schematically depicts an example of an artificial neural network 50 that may be trained as a deep learning model as discussed herein.
  • the network 50 is multi-layered, with a training input 52 (e.g., synthetic data) and multiple layers, including an input layer 54, hidden layers 58A, 58B, and so forth, and an output layer 60, along with a training target 64 used during training.
  • the input layer 54 may also be characterized as or understood to be a hidden layer.
  • Each layer, in this example, is composed of a plurality of “neurons” or nodes 56.
  • the number of neurons 56 may be constant between layers or, as depicted, may vary from layer to layer.
  • Neurons 56 at each layer generate respective outputs that serve as inputs to the neurons 56 of the next hierarchical layer.
  • a weighted sum of the inputs with an added bias is computed to “excite” or “activate” each respective neuron of the layers according to an activation function, such as rectified linear unit (ReLU), sigmoid function, hyperbolic tangent function, or otherwise specified or programmed function.
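As a concrete illustration of this computation, the sketch below evaluates a single neuron: a weighted sum of its inputs plus a bias, passed through a ReLU activation. The values are arbitrary placeholders.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: element-wise max(0, x).
    return np.maximum(0.0, x)

def neuron_forward(inputs, weights, bias, activation=relu):
    # One neuron: activation(w . x + b).
    return activation(np.dot(weights, inputs) + bias)

x = np.array([0.2, -1.3, 0.7])   # inputs from upstream nodes
w = np.array([0.5, 0.1, -0.4])   # learnable weights
print(neuron_forward(x, w, bias=0.05))
```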
  • the outputs of the final layer constitute the network output 60, which, in conjunction with a target image or parameter set 64, is used by a loss or error function 62 to generate an error signal that is backpropagated to guide the network training.
  • the loss or error function 62 measures the difference between the network output and the training target.
  • the loss function may be the mean squared error (MSE) of the voxel level values or partial line integral values and/or may account for differences involving other image features, such as image gradients or other image statistics.
  • the loss function 62 could also be defined by other metrics associated with the particular task in question, such as a softmax function or a DICE value (where DICE refers to the overlap ratio 2|X ∩ Y| / (|X| + |Y|) between two sets, such as segmentations).
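The two metrics mentioned above can be written compactly; the following sketch assumes NumPy arrays (binary masks for DICE) and is illustrative only.

```python
import numpy as np

def mse(output, target):
    # Mean squared error over voxel-level values.
    return np.mean((output - target) ** 2)

def dice(a, b):
    # DICE coefficient: 2|A intersect B| / (|A| + |B|) for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```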
  • the present disclosure primarily discusses these approaches in the context of CT or C-arm systems. However, it should be understood that the following discussion may also be applicable to other imaging modalities and systems including, but not limited to, multi-spectral CT and MRI, as well as to any context where tomographic reconstruction is employed to reconstruct an image from which hemodynamic parameters may be discerned and/or measured.
  • FIG. 2 depicts an imaging system 110 (i.e., a scanner). In the depicted embodiment, the imaging system 110 is a computed tomography imaging system designed to acquire scan data (e.g., X-ray attenuation data) at a variety of radial views around a patient (or other subject or object of interest) and suitable for performing image reconstruction using tomographic reconstruction techniques.
  • imaging system 110 includes a source of X-ray radiation 112 positioned adjacent to a collimator 114 .
  • the X-ray source 112 may be an X-ray tube, a distributed X-ray source (such as a solid-state or thermionic X-ray source) or any other source of X-ray radiation suitable for the acquisition of medical or other images.
  • In MRI embodiments, the measurements are samples in Fourier space and can either be applied directly as the input to the neural network or can first be converted to line integrals in sinogram space.
  • the collimator 114 shapes or limits a beam of X-rays 116 that passes into a region in which a patient/object 118 is positioned.
  • the X-rays 116 are collimated to be a cone shaped beam (i.e., a cone beam) or a fan shaped beam (i.e., a fan beam) that passes through the imaged volume.
  • a portion of the X-ray radiation 120 passes through or around the patient/object 118 (or other subject of interest) and impinges on a detector array, represented generally at reference numeral 122 .
  • Detector elements of the array produce electrical signals that represent the intensity of the incident X-rays 120 . These signals are acquired and processed to reconstruct images of the features within the patient/object 118 .
  • Source 112 is controlled by a system controller 124, which furnishes both power and control signals for computed tomography examination sequences.
  • the system controller 124 controls the source 112 via an X-ray controller 126 which may be a component of the system controller 124 .
  • the X-ray controller 126 may be configured to provide power and timing signals to the X-ray source 112 .
  • the detector 122 is coupled to the system controller 124 , which controls acquisition of the signals generated in the detector 122 .
  • the system controller 124 acquires the signals generated by the detector using a data acquisition system 128 .
  • the data acquisition system 128 receives data collected by readout electronics of the detector 122 .
  • the data acquisition system 128 may receive sampled analog signals from the detector 122 and convert the data to digital signals for subsequent processing by a processing component 130 discussed below.
  • the analog-to-digital conversion may be performed by circuitry provided on the detector 122 itself.
  • the system controller 124 may also execute various signal processing and filtration functions with regard to the acquired signals, such as for initial adjustment of dynamic ranges, interleaving of digital data, and so forth.
  • system controller 124 is coupled to a rotational subsystem 132 and a linear positioning subsystem 134 .
  • the rotational subsystem 132 enables the X-ray source 112 , collimator 114 and the detector 122 to be rotated one or multiple turns around the patient/object 118 , such as rotated primarily in an x,y plane about the patient.
  • the rotational subsystem 132 might include a gantry or C-arm upon which the respective X-ray emission and detection components are disposed.
  • the system controller 124 may be utilized to operate the gantry or C-arm.
  • the linear positioning subsystem 134 may enable the patient/object 118 , or more specifically a table supporting the patient, to be displaced within the bore of the CT system 110 , such as in the z-direction relative to rotation of the gantry.
  • the table may be linearly moved (in a continuous or step-wise fashion) within the gantry to generate images of particular regions of interest of the patient 118 .
  • the system controller 124 controls the movement of the rotational subsystem 132 and/or the linear positioning subsystem 134 via a motor controller 136 .
  • system controller 124 commands operation of the imaging system 110 (such as via the operation of the source 112 , detector 122 , and positioning systems described above) to execute examination protocols, such as a computed tomography perfusion protocol, and to process acquired data.
  • the system controller 124 via the systems and controllers noted above, may rotate a gantry supporting the source 112 and detector 122 about a subject of interest so that X-ray attenuation data may be obtained at one or more angular positions relative to the subject.
  • system controller 124 may also include signal processing circuitry, associated memory circuitry for storing programs and routines executed by the computer (such as routines for performing vascular property estimation techniques described herein), as well as configuration parameters, image data, and so forth.
  • the signals acquired and processed by the system controller 124 are provided to a processing component 130 , which may perform image reconstruction.
  • the processing component 130 may be one or more general or application specific microprocessors.
  • the data collected by the data acquisition system 128 may be transmitted to the processing component 130 directly or after storage in a memory 138 .
  • Any type of memory suitable for storing data might be utilized by such an exemplary system 110 .
  • the memory 138 may include one or more optical, magnetic, and/or solid state memory storage structures.
  • the memory 138 may be located at the acquisition system site and/or may include remote storage devices for storing data, processing parameters, and/or routines for tomographic image reconstruction, as described below.
  • the processing component 130 may be configured to receive commands and scanning parameters from an operator via an operator workstation 140 , typically equipped with a keyboard and/or other input devices.
  • An operator may control the system 110 via the operator workstation 140 .
  • the operator may observe the reconstructed images and/or otherwise operate the system 110 using the operator workstation 140 .
  • a display 142 coupled to the operator workstation 140 may be utilized to observe the reconstructed images and to control imaging.
  • the images may also be printed by a printer 144 which may be coupled to the operator workstation 140 .
  • processing component 130 and operator workstation 140 may be coupled to other output devices, which may include standard or special purpose computer monitors and associated processing circuitry.
  • One or more operator workstations 140 may be further linked in the system for outputting system parameters, requesting examinations, viewing images, and so forth.
  • displays, printers, workstations, and similar devices supplied within the system may be local to the data acquisition components, or may be remote from these components, such as elsewhere within an institution or hospital, or in an entirely different location, linked to the image acquisition system via one or more configurable networks, such as the Internet, virtual private networks, and so forth.
  • the operator workstation 140 may also be coupled to a picture archiving and communications system (PACS) 146 .
  • PACS 146 may in turn be coupled to a remote client 148 , radiology department information system (RIS), hospital information system (HIS) or to an internal or external network, so that others at different locations may gain access to the raw or processed image data.
  • a previously or recently acquired computed tomography perfusion image or image set may be subsequently accessed from such an archiving system for processing in accordance with the techniques discussed here for hemodynamic property estimation or longitudinal tracking.
  • the processing component 130 , memory 138 , and operator workstation 140 may be provided collectively as a general or special purpose computer or workstation configured to operate in accordance with the aspects of the present disclosure.
  • the general or special purpose computer may be provided as a separate component with respect to the data acquisition components of the system 110 or may be provided in a common platform with such components.
  • the system controller 124 may be provided as part of such a computer or workstation or as part of a separate system dedicated to image acquisition.
  • the system of FIG. 2 may be utilized to acquire X-ray projection data (or other scan data for other modalities) for a variety of views about a vascularized region of interest of a patient to reconstruct images (e.g., perfusion images or maps) of the imaged region using the scan data.
  • Projection (or other) data acquired by a system such as the imaging system 110 may be reconstructed as discussed herein to perform a tomographic reconstruction.
  • a rotational subsystem may encompass non-planar rotational aspects (e.g., complex rotational trajectories or other motion including motion in other dimensions so as not to be strictly rotational within a single plane), such as may be suitable for use with certain C-arm type imaging systems.
  • FIG. 3 is a block diagram showing a computing system 150 that may be used in the remote client 148 .
  • the computing system 150 may include a communication component 152 , a processor 154 , a memory 156 , a storage 158 , input/output (I/O) ports 160 , a display 162 , and the like.
  • the communication component 152 may be a wireless or wired communication component that may facilitate communication between the computing system 150 and various types of devices or resources (e.g., a database, a server) directly or via a network.
  • the communication component 152 may facilitate data transfer to the computing system 150 , such that the computing system 150 may receive data from the components depicted in FIG. 2 (e.g., the PACS 146 ), and the like.
  • the communication component 152 may use a variety of communication protocols, such as Open Database Connectivity (ODBC), TCP/IP Protocol, Distributed Relational Database Architecture (DRDA) protocol, Database Change Protocol (DCP), HTTP protocol, other suitable current or future protocols, or combinations thereof.
  • the processor 154 may include single threaded processor(s), multi-threaded processor(s), or both.
  • the processor 154 may process instructions stored in the memory 156 .
  • the processor 154 may also include hardware based processor(s) each including one or more cores.
  • the processor 154 may include general purpose processor(s), special purpose processor(s), or both.
  • the processor 154 may be communicatively coupled to other internal components (such as the communication component 152 , the storage 158 , the I/O ports 160 , and the display 162 ).
  • the memory 156 and the storage 158 may be any suitable articles of manufacture that can serve as media to store processor executable code, data, or the like. These articles of manufacture may represent computer readable media (e.g., any suitable form of memory or storage) that may store the processor executable code used by the processor 154 to perform the presently disclosed techniques. As used herein, applications may include any suitable computer software or program that may be installed onto the computing system 150 and executed by the processor 154 .
  • the memory 156 and the storage 158 may represent non-transitory computer readable media (e.g., any suitable form of memory or storage) that may store the processor executable code used by the processor 154 to perform various techniques described herein. It should be noted that non-transitory merely indicates that the media is tangible and not a signal.
  • the computer system 150 may also include a predictive engine 164 , which may include a training component 166 and a predicting component 168 .
  • the training component 166 may receive the training data (e.g., synthetic data) stored in a database 170 and use the training data to train a machine learning model.
  • a deep learning (DL) model may be trained in a supervised or guided manner (e.g., trained with training data that includes input data and the desired predictive output (e.g., a labeled dataset)).
  • the deep learning model may also be trained in an unsupervised or unguided manner (e.g., trained with training data that includes input data but no desired predictive output (e.g., an unlabeled dataset)).
  • the predicting component 168 may use a set of machine learning models (e.g., functions, algorithms) trained by the training data to predict outputs (e.g., hemodynamic parameters) for initial values (e.g., clinical data) supplied to the predicting component 168 .
  • the predicted outputs may be supervised (e.g., by a user) to monitor or confirm the accuracy of the outputs, and the training data may be updated, which may be used by the training component 166 to retrain the machine learning model.
  • the predictive engine 164 and/or the database 170 may be located in a local environment of the remote client 148 or in a cloud computing environment (e.g., a data center).
  • computed tomography (CT) perfusion generally includes injection of a venous bolus of a contrast agent into a tissue and acquisition of multiple phases of the tissue after the bolus injection using a computed tomography (CT) scanner.
  • indicator dilution techniques have been used in physiological measurements.
  • FIG. 4 is a block diagram of a simplified indicator dilution physical model 200 used to illustrate the computed tomography perfusion process.
  • a constant flow F of liquid runs from an inflow 204 (e.g., artery) to an outflow 206 (e.g., vein) through an internal compartment B of the region of interest 202 .
  • the dilution of indicator in the inflow 204 (e.g., artery) is indicated by an artery signal C_a(t).
  • the response of the region of interest 202 to a unitary pulse of tracer in the inflow 204 is indicated by a residual impulse function Q(t).
  • the response of the region of interest 202 to the artery signal C_a(t) is indicated by a tissue signal C_r(t) in the outflow 206 (e.g., vein).
  • the tissue signal C_r(t) is a convolution of the artery signal C_a(t) and the residual impulse function Q(t), as illustrated in Equation (1):

    C_r(t) = (C_a ⊗ Q)(t) = ∫ C_a(τ) Q(t − τ) dτ   (1)

  • the residual impulse function Q(t) may be obtained by deconvolution of the tissue signal C_r(t) based on Equation (1); a discrete sketch follows below.
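In discrete time, Equation (1) becomes a sum over samples. The sketch below generates a tissue signal from an artery signal and a boxcar residual impulse function; the signal shapes and values are illustrative, not from the disclosure.

```python
import numpy as np

def tissue_signal(c_a, q, dt):
    """Discrete form of Equation (1): C_r = dt * (C_a convolved with Q)."""
    # np.convolve returns the full convolution; keep the first len(c_a)
    # samples so C_r lies on the same time grid as C_a.
    return dt * np.convolve(c_a, q)[: len(c_a)]

dt = 0.5                                   # seconds per sample
t = np.arange(0.0, 60.0, dt)
c_a = (t / 8.0) ** 3 * np.exp(-t / 4.0)    # gamma-variate-like artery curve
q = np.where(t < 6.0, 1.0, 0.0)            # boxcar Q(t) with 6 s transit time
c_r = tissue_signal(c_a, q, dt)
```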
  • the residual impulse function Q(t) may be used to obtain hemodynamic parameters of the region of interest 202 , such as blood flow (BF), blood volume (BV), mean transit time (MTT), time to maximum (TMAX) etc.
  • the tissue blood flow (BF) corresponds to the blood flow entering/exiting a volume of tissue (e.g., expressed in ml/min/100 ml).
  • the blood volume (BV) corresponds to the volume of capillary blood contained in a certain volume of tissue (e.g., expressed in ml/100 ml or in %).
  • the MTT is the mean time taken by blood to pass through the capillary network (the time between the arterial inflow and venous outflow), expressed in seconds. From Q(t), these parameters may be computed as sketched below.
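Under the common convention that Q(t) = BF · R(t), with R a residue function starting at 1, the peak of Q(t) gives BF, its time integral gives BV, MTT follows from the central volume principle (MTT = BV / BF), and TMAX is the time of the peak. The sketch below assumes that convention; it is not mandated by the disclosure.

```python
import numpy as np

def hemodynamic_parameters(q, dt):
    bf = q.max()             # blood flow: peak of Q(t)
    bv = q.sum() * dt        # blood volume: integral of Q(t)
    mtt = bv / bf            # mean transit time (central volume principle)
    tmax = q.argmax() * dt   # time to maximum
    return bf, bv, mtt, tmax
```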
  • the artery signal C_a(t) and the tissue signal C_r(t) may be obtained from computed tomography scan acquisitions or from measurements using other imaging modalities (e.g., MRI).
  • a series of images may be acquired for the region of interest 202, which may include images taken before the injection of a contrast agent (e.g., a tracer bolus, or blood marked by other means (e.g., ASL)), during the injection of the contrast agent, and after the injection of the contrast agent.
  • the computed tomography perfusion acquisition data may include three dimensional spatial data and one dimensional temporal data, which together constitute four dimensional (4D) computed tomography perfusion data.
  • the series of images may be used to study the microcirculation during the bolus injection of the contrast agent.
  • the images acquired before the injection of the contrast agent may be used as reference or baseline images, and the images acquired during and after the injection of the contrast agent may be used to study the effect of the injection relative to the reference or baseline. Accordingly, changes of the residual impulse function Q(t) and the tissue signal C_r(t) due to the injection of the contrast agent may be obtained from the series of images acquired in the computed tomography acquisition.
  • FIG. 5 is a flow chart illustrating a method 220 to obtain a residual impulse function Q(t) and corresponding hemodynamic parameters for a region of interest.
  • a sequence of volumes may be obtained using a series of images acquired with various modalities (e.g., CT, MR, and so forth) for the region of interest.
  • the series of images may include images acquired before, during, and after intravenous injection of a contrast agent (e.g., a tracer bolus, or blood marked by other means (e.g., ASL)).
  • a sequential acquisition may be performed at the level of a slice or volume before, during, and after the injection of the contrast agent, and the images acquired before the start of the injection of the contrast agent may be used as reference images. These images may be segmented to produce a geometric representation of the true underlying lumen geometry. Segmentation of geometric features, such as plaque components, adjoining structures, and so forth, is envisioned.
  • These geometric representations can be voxelized (converted to or represented by volumetric representations where each voxel corresponds to a particular tissue type or combination of tissue types based on the voxel's location relative to the geometric representation), or characterized by polygonal surfaces, NURBS (non-uniform rational b-splines), or any number of other representations.
  • These representations may not exactly match the original shapes of the true lumen due to noise, resolution limits, and other image non-idealities, but they are sufficiently close that when taken together, a large series of these representations extracted from a large set of corresponding images may be representative of the geometric features commonly found in clinical practice.
  • FIG. 6 is a flow chart illustrating a method 260 for training a deep learning model for the deconvolution in the block 230 of FIG. 5 .
  • an artery signal C_a(t) for an area of tissue may be obtained from the 4D computed tomography perfusion data.
  • a synthetic residual impulse function Q_S(t) for the area of tissue may be generated using a ground truth model.
  • An area of tissue used to compute the tissue signal may include many capillaries, and the capillary parameters may be described by a probability distribution. Models may be developed to take into account the distribution of capillaries and capillary parameters in tissue.
  • a set of residual impulse functions Q(t) may be generated and averaged to obtain the synthetic residual impulse function Q_S(t).
  • a tissue signal C_r(t) may be generated based on Equation (1) using the synthetic residual impulse function Q_S(t) and an artery signal C_a(t).
  • the tissue signal C_r(t) may be generated using synthetic artery signal C_a(t) data for which ground truth data is known.
  • the artery signal C_a(t) may instead be based in part on, or derived from, clinical image data (e.g., the 4D computed tomography perfusion data in block 262).
  • a perturbation (e.g., additive noise) may be added to the generated tissue signal C_r(t) to emulate image non-idealities and registration errors present in clinical perfusion data.
  • the generated synthetic residual impulse function Q_S(t) and the tissue signal C_r(t) may be stored (e.g., in the database 170) as training data; a sketch of such a generator follows below.
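A minimal generator for such training pairs might look as follows; the boxcar shape of Q_S(t), the parameter ranges, and the Gaussian perturbation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_training_pair(c_a, dt, noise_std=0.05):
    """Return ((C_a, perturbed C_r), Q_S) for one training example."""
    n = len(c_a)
    t = np.arange(n) * dt
    t0 = rng.uniform(0.0, 4.0)      # bolus arrival delay
    w = rng.uniform(2.0, 12.0)      # mean transit time (MTT)
    f = rng.uniform(0.2, 1.5)       # relative flow
    q_s = np.where((t >= t0) & (t < t0 + w), f, 0.0)
    c_r = dt * np.convolve(c_a, q_s)[:n]         # Equation (1)
    c_r += rng.normal(0.0, noise_std, size=n)    # additive perturbation
    return (c_a, c_r), q_s
```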
  • the arterial signal C_a(t) obtained at block 262 and the tissue signal C_r(t) generated at block 264 may be input into a deep learning model to calculate an estimated residual impulse function Q_e(t) (e.g., using the predicting component 168), as illustrated in detail in FIG. 7.
  • the estimated residual impulse function Q_e(t) and the synthetic residual impulse function Q_S(t) may be used to calculate a loss function, which may be backpropagated to block 266 to guide the deep learning model training, as illustrated in detail in FIG. 8.
  • the loss function may be used to calculate learnable weights and biases for the processing layers (e.g., hidden layers) in the deep learning model, and blocks 266 and 268 may be repeated until the value of the loss function is less than a threshold.
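Blocks 266 and 268 amount to a standard supervised training loop. A sketch in PyTorch follows; the architecture (a small 1-D CNN), the optimizer, and the stopping values are placeholders, since the disclosure does not fix them:

```python
import torch
from torch import nn

# Placeholder batch; in practice these come from the synthetic generator.
batch, n = 16, 120
c_a = torch.rand(batch, n)      # artery signals
c_r = torch.rand(batch, n)      # synthetic tissue signals
q_s = torch.rand(batch, n)      # synthetic residual impulse functions

model = nn.Sequential(          # stand-in for deep learning model 322
    nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
threshold, max_epochs = 1e-4, 500

for epoch in range(max_epochs):
    x = torch.stack([c_a, c_r], dim=1)   # two input channels (block 266)
    q_e = model(x).squeeze(1)            # estimated Q_e(t)
    loss = loss_fn(q_e, q_s)
    optimizer.zero_grad()
    loss.backward()                      # backpropagate loss (block 268)
    optimizer.step()
    if loss.item() < threshold:          # stop when loss is below threshold
        break
```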
  • FIG. 7 is a flow chart illustrating a method 320 for using a deep learning model 322 to predict an estimated residual impulse function Q_e(t).
  • the arterial signal C_a(t) and the tissue signal C_r(t) obtained from the 4D CT perfusion data may be input into the network (e.g., input layers 54 of the network) of the deep learning model 322 at block 328.
  • the deep learning model 322 may output parameters that can be transformed into an estimated residual impulse function Q_e(t).
  • hemodynamic parameters may be determined using the estimated residual impulse function Q_e(t) obtained at block 330.
  • As shown in FIG. 7, multiple deep learning models may be used alone or together to predict the estimated residual impulse function Q_e(t).
  • FIG. 8 is a flow chart illustrating a method 340 for training the deep learning model 322 using a loss function.
  • the arterial signal C a (t) obtained from the CT perfusion 4D data at block 262 and the synthetic tissue signal C r (t) obtained at block 264 may be input into the network (e.g., input layers 54 of the network) of the deep learning model 322 at block 328 .
  • the deep learning model 322 may output parameters that could be transform to an estimated residual impulse function Q e (t).
  • the estimated residual impulse function Q e (t) obtained by the deep learning model 322 at block 330 may be used to calculate various parameters (e.g., hemodynamic parameters) and derive various features, which may be compared with the corresponding parameters and features calculated or derived using the synthetic residual impulse function Q S (t), and the difference may be used to determine a loss function A, which may be used as a bias to train all or a part of the deep learning model 322 .
  • the loss function A may also include the mean squared error (MSE) of the voxel level values or partial line integral values and/or may account for differences involving other image features, such as image gradients or other image statistics.
  • MSE mean squared error
  • the estimated residual impulse function Q e (t) obtained by the deep learning model 322 at block 330 may be used to determine a regularization bias B, which may be related to the characteristics of the estimated residue impulse function Q e (t) (e.g., a second order of differentiation of Q e (t)).
  • a training weight a (e.g., any real number) may be determined for the loss function A and a training weight R (e.g., any real number) may be determined for the regularization bias B, and the weighted loss function A and the weighted regularization bias B may be backpropagated to the network (e.g., hidden layers 58 A, 58 B of the network) of the deep learning model 322 to guide the network training.
  • the blocks 328 , 330 , 342 , 344 , and 346 may be repeated until the value of the loss function is less than a threshold, and the deep learning training for the deep learning model 322 may be finished.
  • the clinical arterial signal C a (t) and tissue signal C r (t) obtained from the computed tomography perfusion 4D data may then be input into the trained deep learning model 322 to determine the estimate residual impulse function Q e (t).
  • the perturbation e.g., additive noise
  • the noises in the output of the trained deep learning model 322 due to image non-idealities in the 4D computed tomography perfusion data or registration errors that may occur during perfusion data acquisitions e.g., patient movements during acquisition
  • the clinical arterial signal C a (t) and tissue signal C r (t) may be sampled by multiphase images acquisition, and the samplings of them are not constrained to be equal or regular. In addition, support of samplings are not constrained to be same across application. A fix support may be used for input as minimum acquisition duration. Moreover, signals (e.g., C a (t), C r (t)) may be interpolated with a fixed step (e.g., ⁇ 0.5 s).


Abstract

Methods and systems are described herein for hemodynamic parameter estimation. In certain embodiments, a set of perfusion data is acquired for a region of interest using an imaging system. An artery signal is obtained from the set of perfusion data. A tissue signal is obtained from the set of perfusion data. The artery signal and the tissue signal are provided as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest. The one or more neural networks are trained using one or more sets of synthetic data.

Description

    BACKGROUND
  • The subject matter disclosed herein relates to the use of deep neural networks to obtain hemodynamic parameters by identifying a non-parametric model from computed tomography (CT) perfusion.
  • Non-invasive imaging technologies (e.g., computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography (US), positron emission tomography (PET), single photon emission computed tomography (SPECT)) allow images of the internal structures or features of a patient or object to be obtained without performing an invasive procedure on the patient or object. In particular, such non-invasive imaging technologies rely on various physical principles (such as the differential transmission of X-rays through a target volume, the reflection of acoustic waves within the volume, the paramagnetic properties of different tissues and materials within the volume, the breakdown of targeted radionuclides within the body, and so forth) to acquire data and to construct images or otherwise represent the observed internal features of the patient/object.
  • By way of example, computed tomography (CT) perfusion is an imaging modality used to evaluate microcirculation in tissues. Computed tomography perfusion imaging allows determination of absolute regional measurements of hemodynamic parameters (e.g., blood flow (BF), blood volume (BV), mean transit time (MTT), time to maximum (TMAX)). Visually coded (e.g., color coded, gray scale, annotated, and so forth) maps of these hemodynamic parameters (e.g., hemodynamic parametric maps) may be produced for comparison against normal values. Threshold or baseline values of these hemodynamic parameters may be established to monitor modifications in microcirculation, which may be used to characterize various pathologies, such as ischemia in organs (e.g., brain, myocardium, lung), tumor neo-vascularization state and changes, characteristics of specific organs (e.g., liver, kidney, lung), etc.
  • Generation of hemodynamic parametric maps from a four dimensional (4D) computed tomography perfusion acquisition involves the use of a deconvolution algorithm to retrieve hemodynamic features from voxel-wise one dimensional (1D) temporal signals. However, current implementations (e.g., a parametric model of perfusion fitted using a least squares (LSQ) regression) do not perform well and exhibit a low signal to noise ratio. In addition, errors may be introduced by poor registration and/or by the superposition of the remains of a previous bolus. Hence, there is a need for a better and more robust technique for generating and evaluating such hemodynamic maps and data.
    SUMMARY
  • A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
  • As discussed herein, techniques are described that relate to utilizing deep learning (DL) approaches to estimate hemodynamic parameters from four dimensional (4D) computed tomography perfusion data. In one embodiment, during a computed tomography acquisition, a series of images is acquired for a region of interest (e.g., a tissue), which includes images taken before, during, and after an injection of a contrast agent (e.g., a tracer bolus, or blood marked by other means (e.g., arterial spin labeling (ASL))) into the region of interest. Deep learning algorithms are trained using synthetic (e.g., simulated) data, which is generated based on the 4D computed tomography perfusion data, to obtain a residual impulse function Q(t) of the region of interest. The deep learning algorithms are also trained to reduce/mitigate the image non-idealities in the 4D computed tomography perfusion data. Neural networks trained in this manner are used to estimate the residual impulse function Q(t) of the region of interest, which is used to determine corresponding hemodynamic parameters of the region of interest.
  • In one embodiment, a method is provided for calculating hemodynamic parameters. In accordance with this embodiment, a set of perfusion data is acquired for a region of interest using an imaging system. An artery signal is obtained from the set of perfusion data. A tissue signal is obtained from the set of perfusion data. The artery signal and the tissue signal are provided as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest. The one or more neural networks are trained using synthetic data.
  • In accordance with further aspects, in such a method the set of perfusion data may comprise computed tomography (CT) perfusion data. Alternatively, in other embodiments, the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data. In the same or other embodiments, the synthetic data are generated based on a defined ground truth model. In the same or other embodiments, the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest. In such an embodiment the one or more hemodynamic parameters may be determined from the residual impulse function. In the same or other embodiments, the one or more neural networks are trained to correct image non-idealities in the set of perfusion data. In the same or other embodiments, the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or a time to maximum (TMAX).
  • In a further embodiment, a system is provided. In accordance with this embodiment, the system comprises one or more processors and memory accessible by the one or more processors, and the memory stores instructions. The instructions, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving a set of perfusion data acquired using an imaging system to image a region of interest; obtaining an artery signal from the set of perfusion data; obtaining a tissue signal from the set of perfusion data; and providing the artery signal and the tissue signal to serve as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest, wherein the one or more neural networks are trained using one or more sets of synthetic data.
  • In accordance with further aspects, in such a system implementation the set of perfusion data may comprise computed tomography (CT) perfusion data. Alternatively, in other embodiments, the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data. In the same or other embodiments, the synthetic data are generated based on a defined ground truth model. In the same or other embodiments, the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest. In such an embodiment the one or more hemodynamic parameters may be determined from the residual impulse function. In the same or other embodiments, the one or more neural networks are trained to correct image non-idealities in the set of perfusion data. In the same or other embodiments, the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or a time to maximum (TMAX).
  • In an additional embodiment, a method is provided for training one or more neural networks. In accordance with this embodiment, a set of synthetic residual impulse functions for a region of interest is generated based on a defined ground truth model. An artery signal is obtained from a set of perfusion data. A synthetic tissue signal is generated based on the set of synthetic residual impulse functions and the artery signal. The one or more neural networks are trained using a signal generated using the synthetic tissue signal and the artery signal.
  • In accordance with further aspects, in such a method the synthetic tissue signal may comprise a perturbation representative of perturbations of the perfusion data. The perturbation may be associated with registration errors that may occur during perfusion data acquisitions (e.g., patient movements during acquisition) or may be associated with noise originating from the image acquisition technique being used (e.g., Gaussian noise, artifacts, speckle noise, bolus superposition). In the same or other embodiments, the set of perfusion data may comprise computed tomography (CT) perfusion data. In other embodiments, the set of perfusion data may comprise magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data. In the same or other embodiments, a loss is used as a bias for the training of the one or more neural networks. In such an implementation the loss is determined based on a comparison of a first set of parameters derived from an estimated residual impulse function output from the one or more neural networks and a second set of parameters derived from the synthetic residual impulse function. The first set of parameters may comprise a first set of hemodynamic parameters and the second set of parameters may comprise a second set of hemodynamic parameters. In the same or other embodiments, the training of the one or more neural networks is determined to be finished when the loss is less than a threshold or after a number of epochs. In the same or other embodiments, a regularization is used as a bias for the training of the one or more neural networks. In such an implementation the regularization is associated with characteristics of an estimated residual impulse function output from the one or more neural networks. In the same or other embodiments, the regularization may be weighted by the deconvolution error.
    BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
  • FIG. 1 depicts an example of an artificial neural network for training a deep learning model, in accordance with aspects of the present disclosure;
  • FIG. 2 is a block diagram depicting components of a computed tomography (CT) imaging system, in accordance with aspects of the present disclosure;
  • FIG. 3 depicts a block diagram of a computing system used to analyze images obtained from the computed tomography imaging system of FIG. 2 , in accordance with aspects of the present disclosure;
  • FIG. 4 depicts a rendering of a simplified indicator dilution physical model, in accordance with aspects of the present disclosure;
  • FIG. 5 depicts a flow chart illustrating a method to obtain a residual impulse function and corresponding hemodynamic parameters for a region of interest, in accordance with aspects of the present disclosure;
  • FIG. 6 depicts a flow chart illustrating a method for training a deep learning (DL) model, in accordance with aspects of the present disclosure;
  • FIG. 7 depicts a flow chart illustrating a method for using a deep learning model to predict an estimated residual impulse function, in accordance with aspects of the present disclosure;
  • FIG. 8 depicts a flow chart illustrating a method for training the deep learning model of FIG. 7 , in accordance with aspects of the present disclosure; and
  • FIG. 9 depicts a ground truth model that may be used for synthetic data generation in FIG. 6 , in accordance with aspects of the present disclosure.
    DETAILED DESCRIPTION
  • One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation specific decisions must be made to achieve the developers' specific goals, such as compliance with system related and business related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
  • While aspects of the following discussion are provided in the context of medical imaging, it should be appreciated that aspects of the disclosed techniques may be applicable to other contexts, and are thus not limited to such medical examples. Indeed, the provision of examples and explanations in such a medical context is only to facilitate explanation by providing instances of real world implementations and applications, and should therefore not be interpreted as limiting the applicability of the present approaches with respect to other applicable uses, such as for other non-destructive and/or non-invasive imaging contexts.
  • As discussed herein, perfusion imaging is an imaging modality that is used to evaluate microcirculation in tissues. Microcirculation is associated with supplying oxygen and nutrients (including drugs and toxins), removing CO2 and other metabolic waste products (e.g., catabolites, toxins), releasing and/or capturing mediators (e.g., hormones, neurotransmitters), producing the immune response and inflammation, regulating tissue fluids, local temperature, and core body temperature, controlling blood pressure, and so forth. Perfusion generally includes injection of a venous bolus of a contrast agent (e.g., a substance or composition that is used to enhance the visibility of a tissue (such as blood) or other media that might otherwise be difficult to observe in images generated using a given imaging modality) into a tissue and acquisition of multiple phases of the tissue after the bolus injection using an imaging modality, such as a computed tomography (CT) scanner. Computed tomography (CT) perfusion imaging allows absolute regional measurements of hemodynamic parameters (e.g., blood flow (BF), blood volume (BV), mean transit time (MTT), time to maximum (TMAX)). Color or other visually coded maps of these hemodynamic parameters (e.g., hemodynamic parametric maps) may be produced for comparison against normal or baseline values for the individual (e.g., longitudinal studies) or for a relevant population or sub-population. Threshold values of these hemodynamic parameters may be established to monitor modifications in microcirculation, which may be used to characterize various pathologies, such as ischemia in organs (e.g., brain, myocardium, lung), tumor neo-vascularization state and changes, characteristics of specific organs (e.g., liver, kidney, lung), etc. The computed tomography perfusion acquisition data may include three dimensional spatial data and one dimensional temporal data, which together constitute four dimensional (4D) computed tomography perfusion data. Generation of hemodynamic parametric maps from the four dimensional (4D) computed tomography perfusion acquisition involves the use of a deconvolution algorithm to retrieve hemodynamic features from voxel-wise one dimensional (1D) temporal signals. The present discussion is directed to using deep learning (DL) approaches to resolve the deconvolution algorithm and to generate the hemodynamic parametric maps or other comparable data outputs.
  • Although computed tomography (CT) examples are primarily provided herein, it should be understood that the disclosed technique may be used in other imaging modalities. For instance, the presently described approach may also be employed on data acquired by other types of tomographic scanners including, but not limited to, ultrasonography (US), positron emission tomography (PET), single photon emission computed tomography (SPECT), magnetic resonance imaging (MRI) scanners and/or other X-ray based imaging techniques, such as C-arm based techniques. The disclosed technique may also be used in the processing of computed tomography angiography (CTA) and perfusion combined acquisition.
  • By way of background, several imaging modalities, such as X-ray computed tomography (e.g., multi-slice CT, helical CT, cone beam CT) and X-ray C-arm systems (e.g., cone beam imaging), measure projections of the object or patient being scanned where the projections, depending on the technique, correspond to Radon transform data, fan beam transform data, cone beam transform data, or non-uniform Fourier transforms. In other contexts, the scan data may be magnetic resonance data (e.g., magnetic resonance imaging (MRI) data) generated in response to applied magnetic fields and RF pulses, and so forth.
  • In other contexts, single photon emission computed tomography (SPECT) and positron emission tomography (PET) may utilize a radiopharmaceutical that is administered to a patient and whose radioactive decay results in the emission of gamma rays from locations within the patient's body (directly in the case of SPECT, or via positron annihilation in the case of PET). The radiopharmaceutical is typically selected so as to be preferentially or differentially distributed in the body based on the physiological or biochemical processes in the body. For example, a radiopharmaceutical may be selected that is preferentially processed or taken up by tumor tissue. In such an example, the radiopharmaceutical will typically be disposed in greater concentrations around tumor tissue within the patient.
  • In other contexts, an ultrasound imaging system may acquire ultrasound data of a patient. In certain embodiments, the ultrasound system may be a digital acquisition and beam former system, but in other embodiments, the ultrasound system may be any suitable type of ultrasound system. Such an ultrasound system may include the ultrasound probe and a workstation (e.g., monitor, console, user interface) which may control operation of the ultrasound probe and may process image data acquired by the ultrasound probe. The ultrasound probe may be coupled to the workstation by any suitable technique for communicating image data and control signals between the ultrasound probe and the workstation such as a wireless, optical, coaxial, or other suitable connection.
  • Reconstruction routines and related correction and calibration routines are employed in conjunction with these imaging modalities to generate useful clinical images and/or data, which in turn may be used to derive or measure hemodynamic parameters of interest, such as by using deep learning (DL) techniques, as discussed herein.
  • Deep learning (DL) approaches discussed herein may be based on artificial neural networks, and may therefore encompass one or more of deep neural networks, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), and so forth. As discussed herein, deep learning techniques (which may also be known as deep machine learning, hierarchical learning, or deep structured learning) are a branch of machine learning techniques that employ mathematical representations of data and artificial neural networks for learning and processing such representations. Neural networks may include multiple layers, such as input layers, hidden layers, and output layers. The basic unit of computation in a neural network is the neuron/node. Each neuron/node receives inputs from some other nodes, or from an external source and computes outputs. The input layer may include neurons/nodes to receive external inputs, such as input data. Each hidden layer is made up of a set of neurons/nodes that have learnable weights and biases, and each neuron/node in the hidden layers may receive inputs from upstream connected nodes or layers and perform operations on the inputs to compute outputs that are provided to downstream connected nodes or layers. The output layer may include neurons/nodes to receive inputs from the hidden layers and output results.
  • By way of example, deep learning (DL) approaches may be characterized by their use of one or more algorithms to extract or model high level abstractions of a type of data of interest. This may be accomplished using one or more processing layers, with each layer typically corresponding to a different level of abstraction and, therefore potentially employing or utilizing different aspects of the initial data or outputs of a preceding layer (i.e., a hierarchy or cascade of layers) as the target of the processes or algorithms of a given layer. In an image processing or reconstruction context, this may be characterized as different layers corresponding to the different feature levels or resolution in the data. In general, the processing from one representation space to the next level representation space can be considered as one ‘stage’ of the process. Each stage of the process can be performed by separate neural networks or by different parts of one larger neural network.
  • With this in mind, the techniques discussed herein utilize deep learning (DL) approaches to estimate hemodynamic parameters from the computed tomography perfusion data. In certain of the implementations discussed herein, deep learning algorithms are trained using synthetic (e.g., simulated) data generated based on clinical data as training data, as opposed to clinical, real world data or geometric constructs. The use of synthetic data for training one or more deep learning algorithms, as discussed herein, is in contrast to the direct use of clinical data for such training purposes, which may involve either estimation of the ground truth state or the acquisition of additional data that is representative of the ground truth state and the registration of the additional data to the clinical data to assemble the training data.
  • As discussed herein, as part of the initial training of deep learning processes to solve a particular problem, training data sets may be employed that have known initial values and known (i.e., ground truth) values for a final output of the deep learning process. In this manner, the ground truth training data may be used to train a network to provide the known correct outputs in response to the known inputs. As discussed in greater detail below, in accordance with the present approach, the synthetic data is used as training data, where the synthesized data is simulated or synthesized or derived from clinical data and/or simple geometric constructs, but is distinct from the clinical data. Further, due to their synthetic nature, the synthetic training data discussed herein are associated with known ground truth properties, without having to estimate or measure such ground truths or perform additional invasive operations to derive such ground truth properties.
  • For example, the training of a single stage may have known input values corresponding to one representation space and known output values corresponding to a next level representation space. In this manner, the deep learning algorithms may process (either in a supervised or guided manner or in an unsupervised or unguided manner) the known or training data sets until the mathematical relationships between the initial data and desired output(s) are discerned and/or the mathematical relationships between the inputs and outputs of each layer are discerned and characterized. Similarly, separate validation data sets may be employed in which both the initial and desired target values are known, but only the initial values are supplied to the trained deep learning algorithms, and the outputs of the deep learning algorithm are compared to the desired target values to validate the prior training and/or to prevent over training.
  • With the preceding in mind, FIG. 1 schematically depicts an example of an artificial neural network 50 that may be trained as a deep learning model as discussed herein. In this example, the network 50 is multi-layered, with a training input 52 (e.g., synthetic data) and multiple layers, including an input layer 54, hidden layers 58A, 58B, and so forth, and an output layer 60, as well as the training target 64, present in the network 50. In certain implementations, the input layer 54 may also be characterized as or understood to be a hidden layer. Each layer, in this example, is composed of a plurality of "neurons" or nodes 56. The number of neurons 56 may be constant between layers or, as depicted, may vary from layer to layer. Neurons 56 at each layer generate respective outputs that serve as inputs to the neurons 56 of the next hierarchical layer. In practice, a weighted sum of the inputs with an added bias is computed to "excite" or "activate" each respective neuron of the layers according to an activation function, such as a rectified linear unit (ReLU), a sigmoid function, a hyperbolic tangent function, or another specified or programmed function. The outputs of the final layer constitute the network output 60 which, in conjunction with a target image or parameter set 64, are used by a loss or error function 62 to generate an error signal, which will be backpropagated to guide the network training.
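  • By way of illustration, a minimal sketch of such a multi-layer network is given below (in PyTorch; the layer sizes and the choice of framework are illustrative assumptions, as the disclosure does not prescribe a specific architecture):

```python
import torch
import torch.nn as nn

# Minimal sketch of the multi-layer network 50 of FIG. 1.
# Layer sizes are placeholders, not values from the disclosure.
class Network50(nn.Module):
    def __init__(self, n_in=64, n_hidden=128, n_out=32):
        super().__init__()
        self.input_layer = nn.Linear(n_in, n_hidden)    # input layer 54
        self.hidden_a = nn.Linear(n_hidden, n_hidden)   # hidden layer 58A
        self.hidden_b = nn.Linear(n_hidden, n_hidden)   # hidden layer 58B
        self.output_layer = nn.Linear(n_hidden, n_out)  # output layer 60

    def forward(self, x):
        # Each neuron computes a weighted sum of its inputs plus a bias,
        # passed through a ReLU activation function.
        x = torch.relu(self.input_layer(x))
        x = torch.relu(self.hidden_a(x))
        x = torch.relu(self.hidden_b(x))
        return self.output_layer(x)

net = Network50()
x = torch.randn(8, 64)               # training input 52 (batch of 8)
target = torch.randn(8, 32)          # training target 64
loss = nn.MSELoss()(net(x), target)  # loss/error function 62 (MSE)
loss.backward()                      # backpropagated error signal
```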
  • The loss or error function 62 measures the difference between the network output and the training target. In certain implementations, the loss function may be the mean squared error (MSE) of the voxel level values or partial line integral values and/or may account for differences involving other image features, such as image gradients or other image statistics. Alternatively, the loss function 62 could be defined by other metrics associated with the particular task in question, such as a softmax function or DICE value, where $\mathrm{DICE} = \frac{2\,|A \cap B|}{|A| + |B|}$, with $A \cap B$ denoting the intersection of regions A and B, and $|\cdot|$ denoting the area of the region.
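  • As a concrete illustration of the DICE metric above (a sketch; the binary masks and values are arbitrary examples, not data from the disclosure):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for two binary region masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    # By convention, two empty regions are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example: two overlapping 1D "regions" of 3 voxels each, sharing 2 voxels.
a = np.array([1, 1, 1, 0, 0])
b = np.array([0, 1, 1, 1, 0])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```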
  • To facilitate explanation of the present approach using deep learning techniques, the present disclosure primarily discusses these approaches in the context of a CT or C-arm systems. However, it should be understood that the following discussion may also be applicable to other image modalities and systems including, but not limited to, multi-spectral CT and MRI, as well as to any context where tomographic reconstruction is employed to reconstruct an image from which hemodynamic parameters may be discerned and/or measured.
  • With this in mind, an example of an imaging system 110 (i.e., a scanner) is depicted in FIG. 2 . In the depicted example, the imaging system 110 is a computed tomography imaging system designed to acquire scan data (e.g., X-ray attenuation data) at a variety of radial views around a patient (or other subject or object of interest) and suitable for performing image reconstruction using tomographic reconstruction techniques. In the embodiment illustrated in FIG. 2 , imaging system 110 includes a source of X-ray radiation 112 positioned adjacent to a collimator 114. The X-ray source 112 may be an X-ray tube, a distributed X-ray source (such as a solid-state or thermionic X-ray source) or any other source of X-ray radiation suitable for the acquisition of medical or other images. Conversely, in MRI embodiments, the measurements are samples in Fourier space and can either be applied directly as the input to the neural network or first be converted to line integrals in sinogram space.
  • In the depicted example, the collimator 114 shapes or limits a beam of X-rays 116 that passes into a region in which a patient/object 118 is positioned. In the depicted example, the X-rays 116 are collimated to be a cone shaped beam (i.e., a cone beam) or a fan shaped beam (i.e., a fan beam) that passes through the imaged volume. A portion of the X-ray radiation 120 passes through or around the patient/object 118 (or other subject of interest) and impinges on a detector array, represented generally at reference numeral 122. Detector elements of the array produce electrical signals that represent the intensity of the incident X-rays 120. These signals are acquired and processed to reconstruct images of the features within the patient/object 118.
  • Source 112 is controlled by a system controller 124, which furnishes both power and control signals for computed tomography examination sequences. In the depicted embodiment, the system controller 124 controls the source 112 via an X-ray controller 126 which may be a component of the system controller 124. In such an embodiment, the X-ray controller 126 may be configured to provide power and timing signals to the X-ray source 112.
  • Moreover, the detector 122 is coupled to the system controller 124, which controls acquisition of the signals generated in the detector 122. In the depicted embodiment, the system controller 124 acquires the signals generated by the detector using a data acquisition system 128. The data acquisition system 128 receives data collected by readout electronics of the detector 122. The data acquisition system 128 may receive sampled analog signals from the detector 122 and convert the data to digital signals for subsequent processing by a processing component 130 discussed below. Alternatively, in other embodiments, the analog to digital (ADC) conversion may be performed by circuitry provided on the detector 122 itself. The system controller 124 may also execute various signal processing and filtration functions with regard to the acquired signals, such as for initial adjustment of dynamic ranges, interleaving of digital data, and so forth.
  • In the embodiment illustrated in FIG. 2 , system controller 124 is coupled to a rotational subsystem 132 and a linear positioning subsystem 134. The rotational subsystem 132 enables the X-ray source 112, collimator 114 and the detector 122 to be rotated one or multiple turns around the patient/object 118, such as rotated primarily in an x,y plane about the patient. It should be noted that the rotational subsystem 132 might include a gantry or C-arm upon which the respective X-ray emission and detection components are disposed. Thus, in such an embodiment, the system controller 124 may be utilized to operate the gantry or C-arm.
  • The linear positioning subsystem 134 may enable the patient/object 118, or more specifically a table supporting the patient, to be displaced within the bore of the CT system 110, such as in the z-direction relative to rotation of the gantry. Thus, the table may be linearly moved (in a continuous or step-wise fashion) within the gantry to generate images of particular regions of interest of the patient 118. In the depicted embodiment, the system controller 124 controls the movement of the rotational subsystem 132 and/or the linear positioning subsystem 134 via a motor controller 136.
  • In general, system controller 124 commands operation of the imaging system 110 (such as via the operation of the source 112, detector 122, and positioning systems described above) to execute examination protocols, such as a computed tomography perfusion protocol, and to process acquired data. For example, the system controller 124, via the systems and controllers noted above, may rotate a gantry supporting the source 112 and detector 122 about a subject of interest so that X-ray attenuation data may be obtained at one or more angular positions relative to the subject. In the present context, system controller 124 may also include signal processing circuitry, associated memory circuitry for storing programs and routines executed by the computer (such as routines for performing vascular property estimation techniques described herein), as well as configuration parameters, image data, and so forth.
  • In the depicted embodiment, the signals acquired and processed by the system controller 124 are provided to a processing component 130, which may perform image reconstruction. The processing component 130 may be one or more general or application specific microprocessors. The data collected by the data acquisition system 128 may be transmitted to the processing component 130 directly or after storage in a memory 138. Any type of memory suitable for storing data might be utilized by such an exemplary system 110. For example, the memory 138 may include one or more optical, magnetic, and/or solid state memory storage structures. Moreover, the memory 138 may be located at the acquisition system site and/or may include remote storage devices for storing data, processing parameters, and/or routines for tomographic image reconstruction, as described below.
  • The processing component 130 may be configured to receive commands and scanning parameters from an operator via an operator workstation 140, typically equipped with a keyboard and/or other input devices. An operator may control the system 110 via the operator workstation 140. Thus, the operator may observe the reconstructed images and/or otherwise operate the system 110 using the operator workstation 140. For example, a display 142 coupled to the operator workstation 140 may be utilized to observe the reconstructed images and to control imaging. Additionally, the images may also be printed by a printer 144 which may be coupled to the operator workstation 140.
  • Further, the processing component 130 and operator workstation 140 may be coupled to other output devices, which may include standard or special purpose computer monitors and associated processing circuitry. One or more operator workstations 140 may be further linked in the system for outputting system parameters, requesting examinations, viewing images, and so forth. In general, displays, printers, workstations, and similar devices supplied within the system may be local to the data acquisition components, or may be remote from these components, such as elsewhere within an institution or hospital, or in an entirely different location, linked to the image acquisition system via one or more configurable networks, such as the Internet, virtual private networks, and so forth.
  • It should be further noted that the operator workstation 140 may also be coupled to a picture archiving and communications system (PACS) 146. PACS 146 may in turn be coupled to a remote client 148, radiology department information system (RIS), hospital information system (HIS) or to an internal or external network, so that others at different locations may gain access to the raw or processed image data. By way of example, in the present context a previously or recently acquired computed tomography perfusion image or image set may be subsequently accessed from such an archiving system for processing in accordance with the techniques discussed here for hemodynamic property estimation or longitudinal tracking.
  • While the preceding discussion has treated the various exemplary components of the imaging system 110 separately, these various components may be provided within a common platform or in interconnected platforms. For example, the processing component 130, memory 138, and operator workstation 140 may be provided collectively as a general or special purpose computer or workstation configured to operate in accordance with the aspects of the present disclosure. In such embodiments, the general or special purpose computer may be provided as a separate component with respect to the data acquisition components of the system 110 or may be provided in a common platform with such components. Likewise, the system controller 124 may be provided as part of such a computer or workstation or as part of a separate system dedicated to image acquisition.
  • The system of FIG. 2 may be utilized to acquire X-ray projection data (or other scan data for other modalities) for a variety of views about a vascularized region of interest of a patient to reconstruct images (e.g., perfusion images or maps) of the imaged region using the scan data. Projection (or other) data acquired by a system such as the imaging system 110 may be reconstructed as discussed herein to perform a tomographic reconstruction. Although the system of FIG. 2 shows a rotational subsystem 132 for rotating the X-ray source 112 and detector 122 about an object or subject, such a rotational subsystem may encompass non-planar rotational aspects (e.g., complex rotational trajectories or other motion including motion in other dimensions so as not to be strictly rotational within a single plane), such as may be suitable for use with certain C-arm type imaging systems.
  • FIG. 3 is a block diagram showing a computing system 150 that may be used in the remote client 148. Although the following description details some example components that make up the computing system 150, it should be understood that the computing system 150 may include additional or fewer components. The computing system 150 may include a communication component 152, a processor 154, a memory 156, a storage 158, input/output (I/O) ports 160, a display 162, and the like. The communication component 152 may be a wireless or wired communication component that may facilitate communication between the computing system 150 and various types of devices or resources (e.g., a database, a server) directly or via a network. Additionally, the communication component 152 may facilitate data transfer to the computing system 150, such that the computing system 150 may receive data from the components depicted in FIG. 2 (e.g., the PACS 146), and the like. The communication component 152 may use a variety of communication protocols, such as Open Database Connectivity (ODBC), TCP/IP Protocol, Distributed Relational Database Architecture (DRDA) protocol, Database Change Protocol (DCP), HTTP protocol, other suitable current or future protocols, or combinations thereof.
  • The processor 154 may include single threaded processor(s), multi-threaded processor(s), or both. The processor 154 may process instructions stored in the memory 156. The processor 154 may also include hardware based processor(s) each including one or more cores. The processor 154 may include general purpose processor(s), special purpose processor(s), or both. The processor 154 may be communicatively coupled to other internal components (such as the communication component 152, the storage 158, the I/O ports 160, and the display 162).
  • The memory 156 and the storage 158 may be any suitable articles of manufacture that can serve as media to store processor executable code, data, or the like. These articles of manufacture may represent non-transitory computer readable media (e.g., any suitable form of memory or storage) that may store the processor executable code used by the processor 154 to perform the presently disclosed techniques. As used herein, applications may include any suitable computer software or program that may be installed onto the computing system 150 and executed by the processor 154. It should be noted that non-transitory merely indicates that the media is tangible and not a signal.
  • The I/O ports 160 may be interfaces that may couple to other peripheral components such as input devices (e.g., keyboard, mouse), sensors, input/output (I/O) modules, and the like. The display 162 may operate as a human machine interface (HMI) to depict visualizations associated with software or executable code being processed by the processor 154. The display 162 may operate to depict a representation of the three dimensional (3D) augmented reality (AR) or virtual reality (VR) visualizations associated with software or executable code being processed by the processor 154. In one embodiment, the display 162 may be a touch display capable of receiving inputs from an operator of the computing system 150. The display 162 may be any suitable type of display, such as a liquid crystal display (LCD), plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, in one embodiment, the display 162 may be provided in conjunction with a touch sensitive mechanism (e.g., a touch screen) that may function as part of a control interface for the computing system 150.
  • The computing system 150 may also include a predictive engine 164, which may include a training component 166 and a predicting component 168. The training component 166 may receive the training data (e.g., synthetic data) stored in a database 170 and use the training data to train a machine learning model. For example, a deep learning (DL) model may be trained in a supervised or guided manner (e.g., trained with training data that includes input data and a desired predictive output (e.g., a labeled dataset)). The deep learning model may also be trained in an unsupervised or unguided manner (e.g., trained with training data that includes input data but no desired predictive output (e.g., an unlabeled dataset)). The predicting component 168 may use a set of machine learning models (e.g., functions, algorithms) trained by the training data to predict outputs (e.g., hemodynamic parameters) for initial values (e.g., clinical data) supplied to the predicting component 168. In some embodiments, the predicted outputs may be supervised (e.g., by a user) to monitor or confirm the accuracy of the outputs, and the training data may be updated, which may be used by the training component 166 to retrain the machine learning model. The predictive engine 164 and/or the database 170 may be located in a local environment of the remote client 148 or in a cloud computing environment (e.g., a data center).
  • With the preceding background and context discussion in mind, the present disclosure relates to using deep learning approaches to estimate hemodynamic parameters from 4D computed tomography perfusion data. As mentioned previously, computed tomography (CT) perfusion generally includes injection of a venous bolus of a contrast agent into a tissue and acquisition of multiple phases of the tissue after the bolus injection using a computed tomography (CT) scanner. To measure the response of the tissue after the bolus injection, indicator dilution techniques have been used in physiological measurements. FIG. 4 is a block diagram of a simplified indicator dilution physical model 200 used to illustrate the computed tomography perfusion process. In the simplified indicator dilution physical model 200, a constant flow F of liquid runs from an inflow 204 (e.g., artery) to an outflow 206 (e.g., vein) through an internal compartment B of the region of interest 202. The dilution of indicator in the inflow 204 (e.g., artery) is indicated by an artery signal Ca(t), and the response of the region of interest 202 to a unitary pulse of tracer in the inflow 204 is indicated by a residual impulse function Q(t). The response of the region of interest 202 to the artery signal Ca(t) is indicated by a tissue signal Cr(t) in the outflow 206 (e.g., vein). The tissue signal Cr(t) is a convolution of the artery signal Ca(t) and the residual impulse function Q(t), as illustrated in Equation (1).
  • $C_r(t) = C_a(t) \otimes Q(t)$    (1)
  • Accordingly, the residual impulse function Q(t) may be obtained by deconvolution of the tissue signal Cr(t) based on Equation (1). The residual impulse function Q(t) may be used to obtain hemodynamic parameters of the region of interest 202, such as blood flow (BF), blood volume (BV), mean transit time (MTT), time to maximum (TMAX), etc. The tissue blood flow (BF) corresponds to the blood flow entering/exiting a volume of tissue (e.g., expressed in ml/min/100 ml). The blood volume (BV) corresponds to the volume of capillary blood contained in a certain volume of tissue (e.g., expressed in ml/100 ml or in %). The MTT is the mean time taken by blood to pass through the capillary network (the time between the arterial inflow and venous outflow), expressed in seconds. The artery signal Ca(t) and the tissue signal Cr(t) may be obtained from computed tomography scan acquisitions or measurements using other imaging modalities (e.g., MRI). For example, during a computed tomography acquisition, a series of images may be acquired for the region of interest 202, which may include images taken before the injection of a contrast agent (e.g., a tracer bolus, or blood marked by other means (e.g., ASL)), during the injection of the contrast agent, and after the injection of the contrast agent. Thus, the computed tomography perfusion acquisition data may include three dimensional spatial data and one dimensional temporal data, which together constitute four dimensional (4D) computed tomography perfusion data. The series of images may be used to study the microcirculation during the bolus injection of the contrast agent. For example, the images acquired before the injection of the contrast agent may be used as reference or baseline images, and the images acquired during and after the injection of the contrast agent may be used to study the effect of the injection relative to the reference or baseline. Accordingly, changes of the residual impulse function Q(t) and the tissue signal Cr(t) due to the injection of the contrast agent may be obtained from the series of images acquired in the computed tomography acquisition.
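  • As a concrete, hedged illustration of Equation (1) and of the parameters derived from Q(t), the following sketch uses an assumed gamma-variate artery signal and a boxcar residual impulse function; the shapes, timings, and values are illustrative choices, not values from the disclosure:

```python
import numpy as np

dt = 0.5                                  # sampling step in seconds (assumed)
t = np.arange(0, 60, dt)

# Illustrative artery signal C_a(t): a gamma-variate bolus arriving at t = 5 s.
ca = np.where(t > 5, ((t - 5) ** 3) * np.exp(-(t - 5) / 1.5), 0.0)

# Illustrative residual impulse function Q(t): relative flow F held over a
# mean transit time W seconds starting at T0.
f, t0, w = 0.6, 2.0, 8.0
q = np.where((t >= t0) & (t < t0 + w), f, 0.0)

# Equation (1): the tissue signal is the convolution C_r(t) = C_a(t) ⊗ Q(t).
cr = np.convolve(ca, q)[: t.size] * dt

# Hemodynamic parameters recovered from Q(t) (standard indicator-dilution
# relations; the disclosure's exact definitions may differ):
bf = q.max()        # blood flow: plateau height of Q(t)
bv = q.sum() * dt   # blood volume: area under Q(t)
mtt = bv / bf       # mean transit time: BV / BF
print(bf, bv, mtt)  # here: 0.6, 4.8, 8.0
```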
  • FIG. 5 is a flow chart illustrating a method 220 to obtain a residual impulse function Q(t) and corresponding hemodynamic parameters for a region of interest. At block 222, a sequence of volumes may be obtained from a series of images acquired using various modalities (e.g., CT, MR, and so forth) for the region of interest. As mentioned previously, the series of images may include images acquired before, during, and after intravenous injection of a contrast agent (e.g., a tracer bolus, or blood marked by other means (e.g., ASL)). For example, a sequential acquisition may be performed at the level of a slice or volume before, during, and after the injection of the contrast agent, and the images acquired before the start of the injection of the contrast agent may be used as reference images. These images may be segmented to produce a geometric representation of the true underlying lumen geometry. Segmentation of geometric features, such as plaque components, adjoining structures, and so forth, is envisioned. These geometric representations can be voxelized (converted to or represented by volumetric representations where each voxel corresponds to a particular tissue type or combination of tissue types based on the voxel's location relative to the geometric representation), or characterized by polygonal surfaces, NURBS (non-uniform rational b-splines), or any number of other representations. These representations may not exactly match the original shapes of the true lumen due to noise, resolution limits, and other image non-idealities, but they are sufficiently close that, when taken together, a large series of these representations extracted from a large set of corresponding images may be representative of the geometric features commonly found in clinical practice. At block 224, tissue signals may be obtained for all voxels of the sequence of volumes using the volumetric representations produced in block 222. At block 226, a sample tissue time signal may be obtained for a voxel or a small space area in the sequence of volumes. At block 228, an arterial signal may be obtained from the series of images acquired at block 222. Then at block 230, a residual impulse function Q(t) of the voxel or the small space area may be obtained by deconvolution of the sample tissue time signal against the arterial signal based on Equation (1). At block 232, the result of the deconvolution may be used to obtain values of parametric maps in the corresponding voxel/small space area. A sketch of this voxel-wise flow follows.
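  • In the sketch below, the deconvolve stand-in uses a naive Fourier-domain division purely as a placeholder for the block 230 deconvolution, which the present approach performs with a trained deep learning model; the function names and the (T, Z, Y, X) data layout are assumptions:

```python
import numpy as np

def deconvolve(ca: np.ndarray, cr: np.ndarray, dt: float) -> np.ndarray:
    """Placeholder for the block 230 deconvolution (here a crude
    Fourier-domain division, NOT the disclosed deep learning method)."""
    n = ca.size
    q = np.fft.irfft(np.fft.rfft(cr) / (np.fft.rfft(ca) + 1e-6), n) / dt
    return np.clip(q, 0.0, None)  # Q(t) is nonnegative during the acquisition

def perfusion_maps(volume_4d: np.ndarray, ca: np.ndarray, dt: float):
    """volume_4d: (T, Z, Y, X) sequence of volumes; returns BF/BV/MTT maps."""
    t_len, *spatial = volume_4d.shape
    signals = volume_4d.reshape(t_len, -1)  # block 224: voxel tissue signals
    bf = np.empty(signals.shape[1])
    bv = np.empty_like(bf)
    for i in range(signals.shape[1]):       # blocks 226/230, voxel by voxel
        q = deconvolve(ca, signals[:, i], dt)
        bf[i] = q.max()
        bv[i] = q.sum() * dt
    mtt = np.divide(bv, bf, out=np.zeros_like(bv), where=bf > 0)
    # block 232: parametric map values per voxel
    return tuple(m.reshape(spatial) for m in (bf, bv, mtt))
```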
  • FIG. 6 is a flow chart illustrating a method 260 for training a deep learning model for the deconvolution in the block 230 of FIG. 5 . At block 262, an artery signal Ca(t) for an area of tissue may be obtained from the 4D computed tomography perfusion data. At block 264, a synthetic residual impulse function QS(t) for the area of tissue may be generated using a ground truth model. An area of tissue used to compute a tissue signal may include many capillaries, and the capillary parameters may be described by a probability distribution. Models may be developed to take into account the distribution of capillaries and capillary parameters in tissue. For a defined model with known probability distributions of parameters, a set of residual impulse functions Q(t) may be generated and averaged to obtain the synthetic residual impulse function QS(t). Then, a tissue signal Cr(t) may be generated based on Equation (1) using the synthetic residual impulse function QS(t) and an artery signal Ca(t). In certain implementations, the tissue signal Cr(t) may be generated using synthetic artery signal Ca(t) data for which ground truth data is known. In certain implementations, the artery signal Ca(t) may be based in part on, or derived from, clinical image data (e.g., the 4D computed tomography perfusion data in block 262). In some embodiments, a perturbation (e.g., additive noise) may be added to the tissue signal Cr(t) to account for the noise, resolution limits, and other image non-idealities in the 4D computed tomography perfusion data, as well as for registration errors that may occur during perfusion data acquisitions (e.g., patient movements during acquisition). The generated synthetic residual impulse function QS(t) and the tissue signal Cr(t) may be stored (e.g., in the database 170) as training data. At block 266, the arterial signal Ca(t) generated at the block 262 and the tissue signal Cr(t) generated at the block 264 may be input into a deep learning model to calculate an estimated residual impulse function Qe(t) (e.g., using the predicting component 168), as illustrated in detail in FIG. 7 . At block 268, the estimated residual impulse function Qe(t) and the synthetic residual impulse function QS(t) may be used to calculate a loss function, which may be backpropagated to the block 266 to guide the deep learning model training, as illustrated in detail in FIG. 8 . For example, the loss function may be used to calculate learnable weights and biases for the processing layers (e.g., hidden layers) in the deep learning model, and the blocks 266 and 268 may be repeated until the value of the loss function is less than a threshold.
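  • A sketch of the synthetic training-pair generation of blocks 262 and 264 follows; the uniform parameter distributions, the exponential tail, and the noise level are illustrative assumptions standing in for the disclosure's unspecified capillary parameter distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_qs(t: np.ndarray, n_capillaries: int = 100) -> np.ndarray:
    """Synthetic residual impulse function Q_S(t): average of per-capillary
    residue functions drawn from assumed probability distributions. Each
    Q(t) is zero before T0, equals a relative flow F over the transit time
    W, then decays toward an extravascular level FE (shapes follow the
    FIG. 9 ground truth model; distributions are assumed here)."""
    qs = np.zeros_like(t)
    for _ in range(n_capillaries):
        t0 = rng.uniform(1.0, 3.0)       # bolus arrival time T0
        f = rng.uniform(0.4, 0.8)        # relative flow F
        fe = rng.uniform(0.0, 0.1 * f)   # extravascular relative flow FE
        w = rng.uniform(4.0, 10.0)       # mean transit time W (MTT)
        q = np.where((t >= t0) & (t < t0 + w), f, 0.0)
        tail = t >= t0 + w
        q[tail] = fe * np.exp(-(t[tail] - t0 - w) / 5.0)
        qs += q
    return qs / n_capillaries            # averaged to obtain Q_S(t)

def make_training_pair(ca: np.ndarray, t: np.ndarray, dt: float, sigma=0.02):
    qs = sample_qs(t)                              # block 264
    cr = np.convolve(ca, qs)[: t.size] * dt        # Equation (1)
    cr += rng.normal(0.0, sigma, cr.shape)         # perturbation (noise)
    return cr, qs                                  # stored as training data
```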
  • FIG. 7 is a flow chart illustrating a method 320 for using a deep learning model 322 to predict an estimated residual impulse function Qe(t). At block 324, the arterial signal Ca(t) and the tissue signal Cr(t) obtained from the 4D CT perfusion data may be input into the network (e.g., input layers 54 of the network) of the deep learning model 322 at block 328. At block 330, the deep learning model 322 may output parameters that may be transformed into an estimated residual impulse function Qe(t). At block 332, hemodynamic parameters may be determined using the estimated residual impulse function Qe(t) obtained at block 330. Although one deep learning model is illustrated in FIG. 7, multiple deep learning models may be used alone or together to predict the estimated residual impulse function Qe(t); one possible network form is sketched after this paragraph.
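The disclosure leaves the architecture of the deep learning model 322 open. One plausible minimal form is a fully connected PyTorch network that consumes the concatenated samples of Ca(t) and Cr(t) and emits the samples of Qe(t) directly; note that block 330 describes outputting parameters that are then transformed into Qe(t), so the direct-sample output here, the layer sizes, and the Softplus nonnegativity constraint are all assumptions:

```python
import torch
import torch.nn as nn

class ResidualImpulseNet(nn.Module):
    """Maps sampled arterial and tissue signals to an estimated
    residual impulse function Qe(t). Architecture is illustrative."""

    def __init__(self, n_samples: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_samples, hidden),  # concatenated Ca and Cr
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_samples),
            nn.Softplus(),  # enforce the nonnegativity of Q(t)
        )

    def forward(self, ca: torch.Tensor, cr: torch.Tensor) -> torch.Tensor:
        # ca, cr: tensors of shape (batch, n_samples) on a common grid.
        return self.net(torch.cat([ca, cr], dim=-1))

# Example instantiation for signals interpolated to 60 time samples:
# model = ResidualImpulseNet(n_samples=60)
```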
  • FIG. 8 is a flow chart illustrating a method 340 for training the deep learning model 322 using a loss function. The arterial signal Ca(t) obtained from the 4D CT perfusion data at block 262 and the synthetic tissue signal Cr(t) obtained at block 264 may be input into the network (e.g., input layers 54 of the network) of the deep learning model 322 at block 328. At block 330, the deep learning model 322 may output parameters that may be transformed into an estimated residual impulse function Qe(t). At block 342, the estimated residual impulse function Qe(t) obtained by the deep learning model 322 at block 330 may be used to calculate various parameters (e.g., hemodynamic parameters) and derive various features, which may be compared with the corresponding parameters and features calculated or derived using the synthetic residual impulse function QS(t); the difference may be used to determine a loss function A, which may be used as a bias to train all or part of the deep learning model 322. The loss function A may also include the mean squared error (MSE) of the voxel-level values or partial line integral values and/or may account for differences involving other image features, such as image gradients or other image statistics. At block 344, the estimated residual impulse function Qe(t) obtained by the deep learning model 322 at block 330 may be used to determine a regularization bias B, which may be related to characteristics of the estimated residual impulse function Qe(t) (e.g., a second-order derivative of Qe(t)). At block 346, a training weight α (e.g., any real number) may be determined for the loss function A and a training weight β (e.g., any real number) may be determined for the regularization bias B, and the weighted loss function A and the weighted regularization bias B may be backpropagated to the network (e.g., hidden layers 58A, 58B of the network) of the deep learning model 322 to guide the network training; a sketch of this weighted objective follows this paragraph. Blocks 328, 330, 342, 344, and 346 may be repeated until the value of the loss function is less than a threshold, at which point the training of the deep learning model 322 may be considered finished. The clinical arterial signal Ca(t) and tissue signal Cr(t) obtained from the 4D computed tomography perfusion data may then be input into the trained deep learning model 322 to determine the estimated residual impulse function Qe(t). In embodiments in which a perturbation (e.g., additive noise) is added to the tissue signal Cr(t), noise in the output of the trained deep learning model 322 due to image non-idealities in the 4D computed tomography perfusion data or registration errors that may occur during perfusion data acquisition (e.g., patient movement during acquisition) may be reduced or mitigated, since the deep learning model 322 has been trained against the noise, resolution limits, other image non-idealities, and registration errors by including the perturbation in the training data. The clinical arterial signal Ca(t) and tissue signal Cr(t) may be sampled by multiphase image acquisition, and their sampling times are not constrained to be equal or regular. In addition, the sampling support is not constrained to be the same across applications; a fixed support may be used for the input as a minimum acquisition duration. Moreover, the signals (e.g., Ca(t), Cr(t)) may be interpolated with a fixed step (e.g., ≤0.5 s).
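A sketch of the weighted objective of blocks 342-346 follows, with the data term A taken here as a plain MSE between Qe(t) and QS(t) and the regularization bias B as the mean squared second-order difference of Qe(t); the specific norms and the default α and β values are assumptions, and the disclosure also allows A to compare hemodynamic parameters derived from the two functions:

```python
import torch

def training_loss(qe, qs, dt, alpha=1.0, beta=0.1):
    """Weighted training objective from FIG. 8.

    qe : estimated residual impulse function Qe(t), shape (..., n).
    qs : synthetic ground-truth QS(t), same shape.
    dt : sampling step in seconds.
    """
    # Data term A: here an MSE on the functions themselves; the
    # disclosure also allows comparing derived parameters/features.
    loss_a = torch.mean((qe - qs) ** 2)
    # Regularization bias B: discrete second derivative of Qe(t),
    # Qe[i+1] - 2*Qe[i] + Qe[i-1], penalized in the mean square to
    # favor smooth estimates.
    d2 = (qe[..., 2:] - 2.0 * qe[..., 1:-1] + qe[..., :-2]) / dt**2
    bias_b = torch.mean(d2 ** 2)
    return alpha * loss_a + beta * bias_b
```

Calling `training_loss(...).backward()` inside a standard optimizer loop implements the backpropagation of the weighted A and B terms described at block 346, repeated until the loss drops below the chosen threshold.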
  • FIG. 9 shows an embodiment of a ground truth model 380 that may be used for the synthetic data generation of FIG. 6. In the ground truth model 380, the value of the residual impulse function Q(t) is zero before time T0. The residual impulse function Q(t) is equal to the relative flow F at time T0 and decreases toward the relative extravascular flow FE after time T0+W, where W is the mean transit time (MTT). In addition, the residual impulse function Q(t) is nonnegative throughout the acquisition. These characteristics of the ground truth model 380 may be used to determine the regularization bias B; a possible transcription in code follows this paragraph.
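The piecewise shape just described might be transcribed as follows. The figure specifies the zero segment before T0, the plateau at F over [T0, T0+W], and a decrease toward FE afterwards, so the exponential decay shape and its time constant `tau` used here are assumptions:

```python
import numpy as np

def ground_truth_q(t, f, fe, t0, w, tau=2.0):
    """Residual impulse function of the ground truth model 380:
    zero before T0, equal to the relative flow F on [T0, T0 + W]
    (W being the mean transit time), then decaying toward the
    relative extravascular flow FE. The decay shape (exponential,
    with assumed time constant tau) is not specified by the figure.
    Q(t) is nonnegative throughout for f >= fe >= 0."""
    q = np.zeros_like(t, dtype=float)
    plateau = (t >= t0) & (t < t0 + w)
    q[plateau] = f
    tail = t >= t0 + w
    q[tail] = fe + (f - fe) * np.exp(-(t[tail] - (t0 + w)) / tau)
    return q
```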
  • Technical effects of the invention include utilizing deep learning (DL) approaches to estimate hemodynamic parameters from four-dimensional (4D) computed tomography perfusion data. During a computed tomography acquisition, a series of images is acquired for a region of interest (e.g., a tissue), including images taken before, during, and after an injection of a contrast agent into the region of interest (e.g., a tracer bolus) or after the blood is marked in another way (e.g., arterial spin labeling (ASL)). Deep learning algorithms are trained using synthetic (e.g., simulated) data generated based on the 4D computed tomography perfusion data to obtain a residual impulse function Q(t) of the region of interest. In addition, the deep learning models may be trained to reduce or mitigate the image non-idealities in the 4D computed tomography perfusion data. Neural networks trained in this manner are used to estimate the residual impulse function Q(t) of the region of interest, which is in turn used to determine corresponding hemodynamic parameters of the region of interest, such as blood flow (BF), blood volume (BV), mean transit time (MTT), etc., as sketched below. In certain implementations, one or more neural networks are trained for hemodynamic parameter assessment using synthetic data for which ground truth data is known. In certain implementations, the synthetic data may be based in part on, or derived from, clinical image data for which ground truth data is not known or available.
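Given an estimated Qe(t), the listed parameters follow from standard indicator-dilution conventions: Q(t) = BF·R(t) with the peak of R equal to 1, BV as the area under Q(t), the central volume relation MTT = BV/BF, and TMAX as the time of the maximum. Exact definitions and scaling factors vary between implementations, so the following is a sketch of those conventions rather than the disclosure's specific computation:

```python
import numpy as np

def hemodynamic_parameters(q, dt):
    """Derive BF, BV, MTT, and TMAX from a sampled residual impulse
    function Q(t), using common indicator-dilution conventions."""
    bf = float(np.max(q))             # blood flow: peak of Q(t)
    bv = float(np.sum(q) * dt)        # blood volume: area under Q(t)
    mtt = bv / bf if bf > 0 else 0.0  # central volume theorem
    tmax = float(np.argmax(q) * dt)   # time to maximum of Q(t)
    return {"BF": bf, "BV": bv, "MTT": mtt, "TMAX": tmax}
```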
  • This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. However, it should be understood that the present disclosure is not intended to be limited to the particular forms disclosed. Rather, the present disclosure is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the following appended claims. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
  • The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).

Claims (20)

1. A method, comprising:
acquiring a set of perfusion data for a region of interest using an imaging system;
obtaining an artery signal from the set of perfusion data;
obtaining a tissue signal from the set of perfusion data; and
providing the artery signal and the tissue signal to serve as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest, wherein the one or more neural networks are trained using one or more clinical perfusion data and one or more synthetic data.
2. The method of claim 1, wherein the set of perfusion data comprises at least one of computed tomography (CT) perfusion data, magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
3. The method of claim 1, wherein the one or more synthetic data are generated based on a defined ground truth model.
4. The method of claim 1, wherein the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest, and wherein the one or more hemodynamic parameters are determined from the residual impulse function.
5. The method of claim 1, comprising correcting non-idealities in the set of perfusion data based on output from the one or more neural networks.
6. The method of claim 1, wherein the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or a time to maximum (TMAX).
7. A system comprising:
one or more processors; and
memory, accessible by the one or more processors, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving a set of perfusion data acquired using an imaging system to image a region of interest;
obtaining an artery signal from the set of perfusion data;
obtaining a tissue signal from the set of perfusion data; and
providing the artery signal and the tissue signal to serve as inputs to one or more neural networks to determine one or more hemodynamic parameters for the region of interest, wherein the one or more neural networks are trained using one or more clinical perfusion data and one or more synthetic data.
8. The system of claim 7, wherein the set of perfusion data comprises at least one of computed tomography (CT) perfusion data, magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
9. The system of claim 7, wherein the one or more synthetic data are generated based on a defined ground truth model.
10. The system of claim 7, wherein the tissue signal is a convolution of the artery signal and a residual impulse function of the region of interest, and wherein the one or more hemodynamic parameters are determined from the residual impulse function.
11. The system of claim 7, wherein the one or more neural networks are trained to correct image non-idealities in the set of perfusion data.
12. The system of claim 7, wherein the one or more hemodynamic parameters comprise at least one of a blood flow (BF), a blood volume (BV), a mean transit time (MTT), or a time to maximum (TMAX).
13. A method for training one or more neural networks, comprising:
generating a set of synthetic residual impulse functions for a region of interest based on a defined ground truth model;
obtaining an artery signal from a set of perfusion data;
generating a synthetic tissue signal based on the set of synthetic residual impulse functions and the artery signal; and
training the one or more neural networks using a signal generated using the synthetic tissue signal and the artery signal.
14. The method of claim 13, wherein the synthetic tissue signal comprises a perturbation related to perturbations of the perfusion data.
15. The method of claim 14, wherein the perturbation is associated with registration errors or with acquisition errors.
16. The method of claim 13, wherein the set of perfusion data comprises at least one of computed tomography (CT) perfusion data, magnetic resonance imaging (MRI) perfusion data, positron emission tomography (PET) perfusion data, single photon emission computed tomography (SPECT) data, or ultrasound imaging data.
17. The method of claim 13, wherein a loss is used as a bias for the training of the one or more neural networks, and wherein the loss is determined based on a comparison of an estimated residual impulse function output from the one or more neural networks and a first set of parameters derived from the estimated residual impulse function with the set of synthetic residual impulse functions and a second set of parameters derived from the set of synthetic residual impulse functions.
18. The method of claim 17, wherein the first set of parameters comprises a first set of hemodynamic parameters and the second set of parameters comprises a second set of hemodynamic parameters.
19. The method of claim 13, wherein a regularization is used in a bias used for the training of the one or more neural networks, and wherein the regularization is associated with characteristics of an estimated residual impulse function output from the one or more neural networks.
20. The method of claim 19, wherein the regularization comprises a second order of differentiation of the estimated residual impulse function.

Priority Applications (3)

Application Number Priority Date Filing Date Title
US18/408,135 US20250221670A1 (en) 2024-01-09 2024-01-09 Method and system to compute hemodynamic parameters
EP24221472.4A EP4585158A1 (en) 2024-01-09 2024-12-19 Method and system to compute hemodynamic parameters
CN202411952056.7A CN120298293A (en) 2024-01-09 2024-12-27 Method and system for calculating hemodynamic parameters

Publications (1)

Publication Number Publication Date
US20250221670A1 2025-07-10

Also Published As

Publication number Publication date
EP4585158A1 (en) 2025-07-16
CN120298293A (en) 2025-07-11

Legal Events

AS Assignment: GE PRECISION HEALTHCARE LLC, WISCONSIN. Assignment of assignors interest; assignors: GALAS, THIERRY; CHAMPION, THEO; GIROT, CHARLY EMMANUEL; signing dates from 20240105 to 20240109. Reel/Frame: 066077/0574.
STPP Status: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Status: NON FINAL ACTION COUNTED, NOT YET MAILED
STPP Status: NON FINAL ACTION MAILED